00:00:00.001 Started by upstream project "autotest-per-patch" build number 127202 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 24392 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.110 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:08.688 The recommended git tool is: git 00:00:08.688 using credential 00000000-0000-0000-0000-000000000002 00:00:08.690 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:08.701 Fetching changes from the remote Git repository 00:00:08.702 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:08.712 Using shallow fetch with depth 1 00:00:08.712 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:08.712 > git --version # timeout=10 00:00:08.722 > git --version # 'git version 2.39.2' 00:00:08.722 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:08.732 Setting http proxy: proxy-dmz.intel.com:911 00:00:08.732 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/41/22241/27 # timeout=5 00:00:15.880 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:15.895 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:15.906 Checking out Revision 8e3d0a641438ef1859b112e62ae4b8ed8ac77a6b (FETCH_HEAD) 00:00:15.906 > git config core.sparsecheckout # timeout=10 00:00:15.919 > git read-tree -mu HEAD # timeout=10 00:00:15.935 > git checkout -f 8e3d0a641438ef1859b112e62ae4b8ed8ac77a6b # timeout=5 00:00:15.955 Commit message: "jenkins/jjb-config: Add release-build jobs to per-patch and nightly" 00:00:15.955 > git rev-list --no-walk 791f075b28e8faf0ee0c5232e81530917a02ab8d # timeout=10 00:00:16.069 [Pipeline] Start of Pipeline 00:00:16.082 [Pipeline] library 00:00:16.083 Loading library shm_lib@master 00:00:16.083 Library shm_lib@master is cached. Copying from home. 00:00:16.096 [Pipeline] node 00:00:16.105 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:16.106 [Pipeline] { 00:00:16.115 [Pipeline] catchError 00:00:16.117 [Pipeline] { 00:00:16.141 [Pipeline] wrap 00:00:16.219 [Pipeline] { 00:00:16.226 [Pipeline] stage 00:00:16.227 [Pipeline] { (Prologue) 00:00:16.385 [Pipeline] sh 00:00:16.671 + logger -p user.info -t JENKINS-CI 00:00:16.688 [Pipeline] echo 00:00:16.689 Node: WFP8 00:00:16.696 [Pipeline] sh 00:00:16.997 [Pipeline] setCustomBuildProperty 00:00:17.009 [Pipeline] echo 00:00:17.010 Cleanup processes 00:00:17.014 [Pipeline] sh 00:00:17.298 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:17.298 1167604 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:17.312 [Pipeline] sh 00:00:17.599 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:17.599 ++ grep -v 'sudo pgrep' 00:00:17.599 ++ awk '{print $1}' 00:00:17.599 + sudo kill -9 00:00:17.599 + true 00:00:17.615 [Pipeline] cleanWs 00:00:17.625 [WS-CLEANUP] Deleting project workspace... 00:00:17.625 [WS-CLEANUP] Deferred wipeout is used... 
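[Editor's note] The pgrep/grep/awk/kill sequence traced just above is the pipeline's stale-process cleanup: it lists anything still running out of the previous job's spdk tree and force-kills it before the workspace is wiped. A condensed sketch of the same step follows; the workspace path is the one from the trace, and collapsing it into a single pipeline is purely illustrative.

    # Sketch only: kill leftover processes from a previous run of this job.
    # '|| true' mirrors the trailing '+ true' in the trace, so an empty match
    # (as happens here) does not fail the build step.
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk | grep -v 'sudo pgrep' | awk '{print $1}')
    [ -n "$pids" ] && sudo kill -9 $pids || true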
00:00:17.632 [WS-CLEANUP] done 00:00:17.637 [Pipeline] setCustomBuildProperty 00:00:17.654 [Pipeline] sh 00:00:17.941 + sudo git config --global --replace-all safe.directory '*' 00:00:18.025 [Pipeline] httpRequest 00:00:18.047 [Pipeline] echo 00:00:18.048 Sorcerer 10.211.164.101 is alive 00:00:18.055 [Pipeline] httpRequest 00:00:18.059 HttpMethod: GET 00:00:18.059 URL: http://10.211.164.101/packages/jbp_8e3d0a641438ef1859b112e62ae4b8ed8ac77a6b.tar.gz 00:00:18.060 Sending request to url: http://10.211.164.101/packages/jbp_8e3d0a641438ef1859b112e62ae4b8ed8ac77a6b.tar.gz 00:00:18.078 Response Code: HTTP/1.1 200 OK 00:00:18.078 Success: Status code 200 is in the accepted range: 200,404 00:00:18.079 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_8e3d0a641438ef1859b112e62ae4b8ed8ac77a6b.tar.gz 00:00:21.382 [Pipeline] sh 00:00:21.665 + tar --no-same-owner -xf jbp_8e3d0a641438ef1859b112e62ae4b8ed8ac77a6b.tar.gz 00:00:21.680 [Pipeline] httpRequest 00:00:21.710 [Pipeline] echo 00:00:21.711 Sorcerer 10.211.164.101 is alive 00:00:21.720 [Pipeline] httpRequest 00:00:21.725 HttpMethod: GET 00:00:21.725 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:21.726 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:21.747 Response Code: HTTP/1.1 200 OK 00:00:21.748 Success: Status code 200 is in the accepted range: 200,404 00:00:21.749 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:03:25.091 [Pipeline] sh 00:03:25.376 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:03:27.932 [Pipeline] sh 00:03:28.214 + git -C spdk log --oneline -n5 00:03:28.214 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:03:28.214 fc2398dfa raid: clear base bdev configure_cb after executing 00:03:28.214 5558f3f50 raid: complete bdev_raid_create after sb is written 00:03:28.214 d005e023b raid: fix empty slot not updated in sb after resize 00:03:28.214 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:03:28.225 [Pipeline] } 00:03:28.241 [Pipeline] // stage 00:03:28.250 [Pipeline] stage 00:03:28.253 [Pipeline] { (Prepare) 00:03:28.272 [Pipeline] writeFile 00:03:28.290 [Pipeline] sh 00:03:28.571 + logger -p user.info -t JENKINS-CI 00:03:28.584 [Pipeline] sh 00:03:28.869 + logger -p user.info -t JENKINS-CI 00:03:28.883 [Pipeline] sh 00:03:29.164 + cat autorun-spdk.conf 00:03:29.164 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:29.164 SPDK_TEST_NVMF=1 00:03:29.164 SPDK_TEST_NVME_CLI=1 00:03:29.164 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:29.164 SPDK_TEST_NVMF_NICS=e810 00:03:29.164 SPDK_TEST_VFIOUSER=1 00:03:29.164 SPDK_RUN_UBSAN=1 00:03:29.164 NET_TYPE=phy 00:03:29.172 RUN_NIGHTLY=0 00:03:29.177 [Pipeline] readFile 00:03:29.205 [Pipeline] withEnv 00:03:29.207 [Pipeline] { 00:03:29.222 [Pipeline] sh 00:03:29.513 + set -ex 00:03:29.513 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:29.513 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:29.513 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:29.513 ++ SPDK_TEST_NVMF=1 00:03:29.513 ++ SPDK_TEST_NVME_CLI=1 00:03:29.513 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:29.513 ++ SPDK_TEST_NVMF_NICS=e810 00:03:29.513 ++ SPDK_TEST_VFIOUSER=1 00:03:29.513 ++ SPDK_RUN_UBSAN=1 00:03:29.513 ++ NET_TYPE=phy 00:03:29.513 ++ RUN_NIGHTLY=0 00:03:29.513 + case $SPDK_TEST_NVMF_NICS in 00:03:29.513 + DRIVERS=ice 00:03:29.513 + [[ tcp == \r\d\m\a ]] 00:03:29.513 + [[ -n ice ]] 00:03:29.513 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:29.513 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:03:29.513 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:03:29.513 rmmod: ERROR: Module irdma is not currently loaded 00:03:29.513 rmmod: ERROR: Module i40iw is not currently loaded 00:03:29.513 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:29.513 + true 00:03:29.513 + for D in $DRIVERS 00:03:29.513 + sudo modprobe ice 00:03:29.513 + exit 0 00:03:29.523 [Pipeline] } 00:03:29.542 [Pipeline] // withEnv 00:03:29.548 [Pipeline] } 00:03:29.565 [Pipeline] // stage 00:03:29.576 [Pipeline] catchError 00:03:29.579 [Pipeline] { 00:03:29.597 [Pipeline] timeout 00:03:29.597 Timeout set to expire in 50 min 00:03:29.599 [Pipeline] { 00:03:29.615 [Pipeline] stage 00:03:29.617 [Pipeline] { (Tests) 00:03:29.634 [Pipeline] sh 00:03:29.923 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:29.923 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:29.923 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:29.923 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:29.923 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:29.923 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:29.923 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:29.923 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:29.923 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:29.923 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:29.923 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:03:29.923 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:29.923 + source /etc/os-release 00:03:29.923 ++ NAME='Fedora Linux' 00:03:29.923 ++ VERSION='38 (Cloud Edition)' 00:03:29.923 ++ ID=fedora 00:03:29.923 ++ VERSION_ID=38 00:03:29.923 ++ VERSION_CODENAME= 00:03:29.923 ++ PLATFORM_ID=platform:f38 00:03:29.923 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:03:29.923 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:29.923 ++ LOGO=fedora-logo-icon 00:03:29.923 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:03:29.923 ++ HOME_URL=https://fedoraproject.org/ 00:03:29.923 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:03:29.923 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:29.923 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:29.923 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:29.923 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:03:29.923 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:29.923 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:03:29.923 ++ SUPPORT_END=2024-05-14 00:03:29.923 ++ VARIANT='Cloud Edition' 00:03:29.923 ++ VARIANT_ID=cloud 00:03:29.923 + uname -a 00:03:29.923 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:03:29.923 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:32.463 Hugepages 00:03:32.463 node hugesize free / total 00:03:32.463 node0 1048576kB 0 / 0 00:03:32.463 node0 2048kB 0 / 0 00:03:32.463 node1 1048576kB 0 / 0 00:03:32.463 node1 2048kB 0 / 0 00:03:32.463 00:03:32.463 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:32.463 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:32.463 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:32.463 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:32.463 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:32.463 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:32.463 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:32.463 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:32.463 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:32.463 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:32.463 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:32.463 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:32.463 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:32.463 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:32.463 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:32.463 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:32.463 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:32.463 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:32.463 + rm -f /tmp/spdk-ld-path 00:03:32.463 + source autorun-spdk.conf 00:03:32.463 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:32.463 ++ SPDK_TEST_NVMF=1 00:03:32.463 ++ SPDK_TEST_NVME_CLI=1 00:03:32.463 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:32.463 ++ SPDK_TEST_NVMF_NICS=e810 00:03:32.463 ++ SPDK_TEST_VFIOUSER=1 00:03:32.463 ++ SPDK_RUN_UBSAN=1 00:03:32.463 ++ NET_TYPE=phy 00:03:32.463 ++ RUN_NIGHTLY=0 00:03:32.463 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:32.463 + [[ -n '' ]] 00:03:32.463 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:32.463 + for M in /var/spdk/build-*-manifest.txt 00:03:32.463 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:03:32.463 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:32.463 + for M in /var/spdk/build-*-manifest.txt 00:03:32.463 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:32.463 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:32.463 ++ uname 00:03:32.463 + [[ Linux == \L\i\n\u\x ]] 00:03:32.463 + sudo dmesg -T 00:03:32.463 + sudo dmesg --clear 00:03:32.463 + dmesg_pid=1169063 00:03:32.463 + [[ Fedora Linux == FreeBSD ]] 00:03:32.463 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:32.463 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:32.463 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:32.463 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:03:32.463 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:03:32.463 + sudo dmesg -Tw 00:03:32.464 + [[ -x /usr/src/fio-static/fio ]] 00:03:32.464 + export FIO_BIN=/usr/src/fio-static/fio 00:03:32.464 + FIO_BIN=/usr/src/fio-static/fio 00:03:32.464 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:32.464 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:32.464 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:32.464 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:32.464 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:32.464 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:32.464 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:32.464 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:32.464 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:32.464 Test configuration: 00:03:32.464 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:32.464 SPDK_TEST_NVMF=1 00:03:32.464 SPDK_TEST_NVME_CLI=1 00:03:32.464 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:32.464 SPDK_TEST_NVMF_NICS=e810 00:03:32.464 SPDK_TEST_VFIOUSER=1 00:03:32.464 SPDK_RUN_UBSAN=1 00:03:32.464 NET_TYPE=phy 00:03:32.464 RUN_NIGHTLY=0 10:51:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:32.464 10:51:51 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:32.464 10:51:51 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:32.464 10:51:51 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:32.464 10:51:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.464 10:51:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.464 10:51:51 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.464 10:51:51 -- paths/export.sh@5 -- $ export PATH 00:03:32.464 10:51:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.464 10:51:51 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:32.464 10:51:51 -- common/autobuild_common.sh@447 -- $ date +%s 00:03:32.464 10:51:51 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721983911.XXXXXX 00:03:32.464 10:51:51 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721983911.fWb4lr 00:03:32.464 10:51:51 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:03:32.464 10:51:51 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:03:32.464 10:51:51 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:03:32.464 10:51:51 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:32.464 10:51:51 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:32.464 10:51:51 -- common/autobuild_common.sh@463 -- $ get_config_params 00:03:32.464 10:51:51 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:03:32.464 10:51:51 -- common/autotest_common.sh@10 -- $ set +x 00:03:32.724 10:51:51 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:03:32.724 10:51:51 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:03:32.724 10:51:51 -- pm/common@17 -- $ local monitor 00:03:32.724 10:51:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.724 10:51:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.724 10:51:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.724 10:51:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.724 10:51:51 -- pm/common@25 -- $ sleep 1 00:03:32.724 10:51:51 -- pm/common@21 -- $ date +%s 00:03:32.724 10:51:51 -- pm/common@21 -- $ date +%s 00:03:32.724 10:51:51 -- pm/common@21 -- $ date +%s 00:03:32.724 10:51:51 -- pm/common@21 -- $ date +%s 00:03:32.724 10:51:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721983911 00:03:32.724 10:51:51 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721983911 00:03:32.724 10:51:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721983911 00:03:32.724 10:51:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721983911 00:03:32.724 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721983911_collect-cpu-temp.pm.log 00:03:32.724 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721983911_collect-vmstat.pm.log 00:03:32.724 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721983911_collect-cpu-load.pm.log 00:03:32.724 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721983911_collect-bmc-pm.bmc.pm.log 00:03:33.664 10:51:52 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:03:33.664 10:51:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:33.664 10:51:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:33.664 10:51:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:33.664 10:51:52 -- spdk/autobuild.sh@16 -- $ date -u 00:03:33.664 Fri Jul 26 08:51:52 AM UTC 2024 00:03:33.664 10:51:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:33.664 v24.09-pre-321-g704257090 00:03:33.664 10:51:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:33.664 10:51:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:33.664 10:51:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:33.664 10:51:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:33.664 10:51:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:33.664 10:51:52 -- common/autotest_common.sh@10 -- $ set +x 00:03:33.664 ************************************ 00:03:33.664 START TEST ubsan 00:03:33.664 ************************************ 00:03:33.664 10:51:53 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:33.664 using ubsan 00:03:33.664 00:03:33.664 real 0m0.000s 00:03:33.664 user 0m0.000s 00:03:33.664 sys 0m0.000s 00:03:33.664 10:51:53 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:33.664 10:51:53 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:33.664 ************************************ 00:03:33.664 END TEST ubsan 00:03:33.664 ************************************ 00:03:33.664 10:51:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:33.664 10:51:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:33.664 10:51:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:33.664 10:51:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:33.664 10:51:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:33.664 10:51:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:33.664 10:51:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:33.664 10:51:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:33.664 10:51:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma 
--with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:03:33.924 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:33.924 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:33.924 Using 'verbs' RDMA provider 00:03:47.157 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:57.155 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:57.155 Creating mk/config.mk...done. 00:03:57.155 Creating mk/cc.flags.mk...done. 00:03:57.155 Type 'make' to build. 00:03:57.155 10:52:16 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:03:57.155 10:52:16 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:57.155 10:52:16 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:57.155 10:52:16 -- common/autotest_common.sh@10 -- $ set +x 00:03:57.155 ************************************ 00:03:57.155 START TEST make 00:03:57.155 ************************************ 00:03:57.155 10:52:16 make -- common/autotest_common.sh@1125 -- $ make -j96 00:03:57.414 make[1]: Nothing to be done for 'all'. 00:03:58.798 The Meson build system 00:03:58.798 Version: 1.3.1 00:03:58.798 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:58.798 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:58.798 Build type: native build 00:03:58.798 Project name: libvfio-user 00:03:58.798 Project version: 0.0.1 00:03:58.798 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:58.798 C linker for the host machine: cc ld.bfd 2.39-16 00:03:58.798 Host machine cpu family: x86_64 00:03:58.798 Host machine cpu: x86_64 00:03:58.798 Run-time dependency threads found: YES 00:03:58.798 Library dl found: YES 00:03:58.798 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:58.798 Run-time dependency json-c found: YES 0.17 00:03:58.798 Run-time dependency cmocka found: YES 1.1.7 00:03:58.798 Program pytest-3 found: NO 00:03:58.798 Program flake8 found: NO 00:03:58.798 Program misspell-fixer found: NO 00:03:58.798 Program restructuredtext-lint found: NO 00:03:58.798 Program valgrind found: YES (/usr/bin/valgrind) 00:03:58.798 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:58.798 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:58.798 Compiler for C supports arguments -Wwrite-strings: YES 00:03:58.798 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:58.798 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:58.798 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:58.798 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
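[Editor's note] The configure invocation and the make -j96 traced above are the core of this job's build phase; the meson/ninja output around this point is make building the libvfio-user submodule, with DPDK following further down. A rough sketch of reproducing that step outside Jenkins: the flag set is copied verbatim from the trace, while the clone/submodule commands, the checkout location, and $(nproc) in place of the job's -j96 are assumptions, and --with-fio=/usr/src/fio expects the fio sources to be present at that path.

    # Sketch only: the configure/make step this job runs, on a local checkout.
    git clone https://github.com/spdk/spdk && cd spdk   # assumed location; the job builds in its Jenkins workspace
    git submodule update --init                         # pulls dpdk, libvfio-user, isa-l, etc.
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"                                   # the job passes -j96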
00:03:58.798 Build targets in project: 8 00:03:58.798 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:58.798 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:58.798 00:03:58.798 libvfio-user 0.0.1 00:03:58.798 00:03:58.798 User defined options 00:03:58.798 buildtype : debug 00:03:58.798 default_library: shared 00:03:58.798 libdir : /usr/local/lib 00:03:58.798 00:03:58.798 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:59.364 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:59.364 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:59.364 [2/37] Compiling C object samples/null.p/null.c.o 00:03:59.364 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:59.364 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:59.364 [5/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:59.364 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:59.364 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:59.364 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:59.364 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:59.364 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:59.364 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:59.364 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:59.364 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:59.364 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:59.364 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:59.364 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:59.364 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:59.364 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:59.364 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:59.364 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:59.364 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:59.364 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:59.364 [23/37] Compiling C object samples/server.p/server.c.o 00:03:59.364 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:59.364 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:59.364 [26/37] Compiling C object samples/client.p/client.c.o 00:03:59.364 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:59.365 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:59.365 [29/37] Linking target samples/client 00:03:59.365 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:59.365 [31/37] Linking target test/unit_tests 00:03:59.623 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:59.623 [33/37] Linking target samples/gpio-pci-idio-16 00:03:59.623 [34/37] Linking target samples/server 00:03:59.623 [35/37] Linking target samples/null 00:03:59.623 [36/37] Linking target samples/shadow_ioeventfd_server 00:03:59.623 [37/37] Linking target samples/lspci 00:03:59.623 INFO: autodetecting backend as ninja 00:03:59.623 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:03:59.623 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:59.882 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:59.882 ninja: no work to do. 00:04:05.161 The Meson build system 00:04:05.161 Version: 1.3.1 00:04:05.161 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:04:05.161 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:04:05.161 Build type: native build 00:04:05.161 Program cat found: YES (/usr/bin/cat) 00:04:05.161 Project name: DPDK 00:04:05.161 Project version: 24.03.0 00:04:05.161 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:04:05.161 C linker for the host machine: cc ld.bfd 2.39-16 00:04:05.161 Host machine cpu family: x86_64 00:04:05.161 Host machine cpu: x86_64 00:04:05.161 Message: ## Building in Developer Mode ## 00:04:05.161 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:05.161 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:04:05.161 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:05.161 Program python3 found: YES (/usr/bin/python3) 00:04:05.161 Program cat found: YES (/usr/bin/cat) 00:04:05.161 Compiler for C supports arguments -march=native: YES 00:04:05.161 Checking for size of "void *" : 8 00:04:05.161 Checking for size of "void *" : 8 (cached) 00:04:05.161 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:04:05.161 Library m found: YES 00:04:05.161 Library numa found: YES 00:04:05.161 Has header "numaif.h" : YES 00:04:05.161 Library fdt found: NO 00:04:05.161 Library execinfo found: NO 00:04:05.161 Has header "execinfo.h" : YES 00:04:05.161 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:05.161 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:05.161 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:05.161 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:05.161 Run-time dependency openssl found: YES 3.0.9 00:04:05.161 Run-time dependency libpcap found: YES 1.10.4 00:04:05.161 Has header "pcap.h" with dependency libpcap: YES 00:04:05.161 Compiler for C supports arguments -Wcast-qual: YES 00:04:05.161 Compiler for C supports arguments -Wdeprecated: YES 00:04:05.161 Compiler for C supports arguments -Wformat: YES 00:04:05.161 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:05.161 Compiler for C supports arguments -Wformat-security: NO 00:04:05.161 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:05.161 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:05.161 Compiler for C supports arguments -Wnested-externs: YES 00:04:05.161 Compiler for C supports arguments -Wold-style-definition: YES 00:04:05.162 Compiler for C supports arguments -Wpointer-arith: YES 00:04:05.162 Compiler for C supports arguments -Wsign-compare: YES 00:04:05.162 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:05.162 Compiler for C supports arguments -Wundef: YES 00:04:05.162 Compiler for C supports arguments -Wwrite-strings: YES 00:04:05.162 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:05.162 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:04:05.162 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:05.162 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:05.162 Program objdump found: YES (/usr/bin/objdump) 00:04:05.162 Compiler for C supports arguments -mavx512f: YES 00:04:05.162 Checking if "AVX512 checking" compiles: YES 00:04:05.162 Fetching value of define "__SSE4_2__" : 1 00:04:05.162 Fetching value of define "__AES__" : 1 00:04:05.162 Fetching value of define "__AVX__" : 1 00:04:05.162 Fetching value of define "__AVX2__" : 1 00:04:05.162 Fetching value of define "__AVX512BW__" : 1 00:04:05.162 Fetching value of define "__AVX512CD__" : 1 00:04:05.162 Fetching value of define "__AVX512DQ__" : 1 00:04:05.162 Fetching value of define "__AVX512F__" : 1 00:04:05.162 Fetching value of define "__AVX512VL__" : 1 00:04:05.162 Fetching value of define "__PCLMUL__" : 1 00:04:05.162 Fetching value of define "__RDRND__" : 1 00:04:05.162 Fetching value of define "__RDSEED__" : 1 00:04:05.162 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:05.162 Fetching value of define "__znver1__" : (undefined) 00:04:05.162 Fetching value of define "__znver2__" : (undefined) 00:04:05.162 Fetching value of define "__znver3__" : (undefined) 00:04:05.162 Fetching value of define "__znver4__" : (undefined) 00:04:05.162 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:05.162 Message: lib/log: Defining dependency "log" 00:04:05.162 Message: lib/kvargs: Defining dependency "kvargs" 00:04:05.162 Message: lib/telemetry: Defining dependency "telemetry" 00:04:05.162 Checking for function "getentropy" : NO 00:04:05.162 Message: lib/eal: Defining dependency "eal" 00:04:05.162 Message: lib/ring: Defining dependency "ring" 00:04:05.162 Message: lib/rcu: Defining dependency "rcu" 00:04:05.162 Message: lib/mempool: Defining dependency "mempool" 00:04:05.162 Message: lib/mbuf: Defining dependency "mbuf" 00:04:05.162 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:05.162 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:05.162 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:05.162 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:05.162 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:05.162 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:05.162 Compiler for C supports arguments -mpclmul: YES 00:04:05.162 Compiler for C supports arguments -maes: YES 00:04:05.162 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:05.162 Compiler for C supports arguments -mavx512bw: YES 00:04:05.162 Compiler for C supports arguments -mavx512dq: YES 00:04:05.162 Compiler for C supports arguments -mavx512vl: YES 00:04:05.162 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:05.162 Compiler for C supports arguments -mavx2: YES 00:04:05.162 Compiler for C supports arguments -mavx: YES 00:04:05.162 Message: lib/net: Defining dependency "net" 00:04:05.162 Message: lib/meter: Defining dependency "meter" 00:04:05.162 Message: lib/ethdev: Defining dependency "ethdev" 00:04:05.162 Message: lib/pci: Defining dependency "pci" 00:04:05.162 Message: lib/cmdline: Defining dependency "cmdline" 00:04:05.162 Message: lib/hash: Defining dependency "hash" 00:04:05.162 Message: lib/timer: Defining dependency "timer" 00:04:05.162 Message: lib/compressdev: Defining dependency "compressdev" 00:04:05.162 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:05.162 Message: lib/dmadev: Defining dependency "dmadev" 00:04:05.162 
Compiler for C supports arguments -Wno-cast-qual: YES 00:04:05.162 Message: lib/power: Defining dependency "power" 00:04:05.162 Message: lib/reorder: Defining dependency "reorder" 00:04:05.162 Message: lib/security: Defining dependency "security" 00:04:05.162 Has header "linux/userfaultfd.h" : YES 00:04:05.162 Has header "linux/vduse.h" : YES 00:04:05.162 Message: lib/vhost: Defining dependency "vhost" 00:04:05.162 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:05.162 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:05.162 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:05.162 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:05.162 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:05.162 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:05.162 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:05.162 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:05.162 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:05.162 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:05.162 Program doxygen found: YES (/usr/bin/doxygen) 00:04:05.162 Configuring doxy-api-html.conf using configuration 00:04:05.162 Configuring doxy-api-man.conf using configuration 00:04:05.162 Program mandb found: YES (/usr/bin/mandb) 00:04:05.162 Program sphinx-build found: NO 00:04:05.162 Configuring rte_build_config.h using configuration 00:04:05.162 Message: 00:04:05.162 ================= 00:04:05.162 Applications Enabled 00:04:05.162 ================= 00:04:05.162 00:04:05.162 apps: 00:04:05.162 00:04:05.162 00:04:05.162 Message: 00:04:05.162 ================= 00:04:05.162 Libraries Enabled 00:04:05.162 ================= 00:04:05.162 00:04:05.162 libs: 00:04:05.162 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:05.162 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:05.162 cryptodev, dmadev, power, reorder, security, vhost, 00:04:05.162 00:04:05.162 Message: 00:04:05.162 =============== 00:04:05.162 Drivers Enabled 00:04:05.162 =============== 00:04:05.162 00:04:05.162 common: 00:04:05.162 00:04:05.162 bus: 00:04:05.162 pci, vdev, 00:04:05.162 mempool: 00:04:05.162 ring, 00:04:05.162 dma: 00:04:05.162 00:04:05.162 net: 00:04:05.162 00:04:05.162 crypto: 00:04:05.162 00:04:05.162 compress: 00:04:05.162 00:04:05.162 vdpa: 00:04:05.162 00:04:05.162 00:04:05.162 Message: 00:04:05.162 ================= 00:04:05.162 Content Skipped 00:04:05.162 ================= 00:04:05.162 00:04:05.162 apps: 00:04:05.162 dumpcap: explicitly disabled via build config 00:04:05.162 graph: explicitly disabled via build config 00:04:05.162 pdump: explicitly disabled via build config 00:04:05.162 proc-info: explicitly disabled via build config 00:04:05.162 test-acl: explicitly disabled via build config 00:04:05.162 test-bbdev: explicitly disabled via build config 00:04:05.162 test-cmdline: explicitly disabled via build config 00:04:05.162 test-compress-perf: explicitly disabled via build config 00:04:05.162 test-crypto-perf: explicitly disabled via build config 00:04:05.162 test-dma-perf: explicitly disabled via build config 00:04:05.162 test-eventdev: explicitly disabled via build config 00:04:05.162 test-fib: explicitly disabled via build config 00:04:05.162 test-flow-perf: explicitly disabled via build config 00:04:05.162 test-gpudev: explicitly disabled via build config 
00:04:05.162 test-mldev: explicitly disabled via build config 00:04:05.162 test-pipeline: explicitly disabled via build config 00:04:05.162 test-pmd: explicitly disabled via build config 00:04:05.162 test-regex: explicitly disabled via build config 00:04:05.162 test-sad: explicitly disabled via build config 00:04:05.162 test-security-perf: explicitly disabled via build config 00:04:05.162 00:04:05.162 libs: 00:04:05.162 argparse: explicitly disabled via build config 00:04:05.162 metrics: explicitly disabled via build config 00:04:05.162 acl: explicitly disabled via build config 00:04:05.162 bbdev: explicitly disabled via build config 00:04:05.162 bitratestats: explicitly disabled via build config 00:04:05.162 bpf: explicitly disabled via build config 00:04:05.162 cfgfile: explicitly disabled via build config 00:04:05.162 distributor: explicitly disabled via build config 00:04:05.162 efd: explicitly disabled via build config 00:04:05.162 eventdev: explicitly disabled via build config 00:04:05.162 dispatcher: explicitly disabled via build config 00:04:05.162 gpudev: explicitly disabled via build config 00:04:05.162 gro: explicitly disabled via build config 00:04:05.162 gso: explicitly disabled via build config 00:04:05.162 ip_frag: explicitly disabled via build config 00:04:05.162 jobstats: explicitly disabled via build config 00:04:05.162 latencystats: explicitly disabled via build config 00:04:05.162 lpm: explicitly disabled via build config 00:04:05.162 member: explicitly disabled via build config 00:04:05.162 pcapng: explicitly disabled via build config 00:04:05.162 rawdev: explicitly disabled via build config 00:04:05.162 regexdev: explicitly disabled via build config 00:04:05.162 mldev: explicitly disabled via build config 00:04:05.162 rib: explicitly disabled via build config 00:04:05.162 sched: explicitly disabled via build config 00:04:05.162 stack: explicitly disabled via build config 00:04:05.162 ipsec: explicitly disabled via build config 00:04:05.162 pdcp: explicitly disabled via build config 00:04:05.162 fib: explicitly disabled via build config 00:04:05.162 port: explicitly disabled via build config 00:04:05.162 pdump: explicitly disabled via build config 00:04:05.162 table: explicitly disabled via build config 00:04:05.162 pipeline: explicitly disabled via build config 00:04:05.162 graph: explicitly disabled via build config 00:04:05.162 node: explicitly disabled via build config 00:04:05.162 00:04:05.162 drivers: 00:04:05.162 common/cpt: not in enabled drivers build config 00:04:05.162 common/dpaax: not in enabled drivers build config 00:04:05.162 common/iavf: not in enabled drivers build config 00:04:05.162 common/idpf: not in enabled drivers build config 00:04:05.162 common/ionic: not in enabled drivers build config 00:04:05.162 common/mvep: not in enabled drivers build config 00:04:05.163 common/octeontx: not in enabled drivers build config 00:04:05.163 bus/auxiliary: not in enabled drivers build config 00:04:05.163 bus/cdx: not in enabled drivers build config 00:04:05.163 bus/dpaa: not in enabled drivers build config 00:04:05.163 bus/fslmc: not in enabled drivers build config 00:04:05.163 bus/ifpga: not in enabled drivers build config 00:04:05.163 bus/platform: not in enabled drivers build config 00:04:05.163 bus/uacce: not in enabled drivers build config 00:04:05.163 bus/vmbus: not in enabled drivers build config 00:04:05.163 common/cnxk: not in enabled drivers build config 00:04:05.163 common/mlx5: not in enabled drivers build config 00:04:05.163 common/nfp: not in 
enabled drivers build config 00:04:05.163 common/nitrox: not in enabled drivers build config 00:04:05.163 common/qat: not in enabled drivers build config 00:04:05.163 common/sfc_efx: not in enabled drivers build config 00:04:05.163 mempool/bucket: not in enabled drivers build config 00:04:05.163 mempool/cnxk: not in enabled drivers build config 00:04:05.163 mempool/dpaa: not in enabled drivers build config 00:04:05.163 mempool/dpaa2: not in enabled drivers build config 00:04:05.163 mempool/octeontx: not in enabled drivers build config 00:04:05.163 mempool/stack: not in enabled drivers build config 00:04:05.163 dma/cnxk: not in enabled drivers build config 00:04:05.163 dma/dpaa: not in enabled drivers build config 00:04:05.163 dma/dpaa2: not in enabled drivers build config 00:04:05.163 dma/hisilicon: not in enabled drivers build config 00:04:05.163 dma/idxd: not in enabled drivers build config 00:04:05.163 dma/ioat: not in enabled drivers build config 00:04:05.163 dma/skeleton: not in enabled drivers build config 00:04:05.163 net/af_packet: not in enabled drivers build config 00:04:05.163 net/af_xdp: not in enabled drivers build config 00:04:05.163 net/ark: not in enabled drivers build config 00:04:05.163 net/atlantic: not in enabled drivers build config 00:04:05.163 net/avp: not in enabled drivers build config 00:04:05.163 net/axgbe: not in enabled drivers build config 00:04:05.163 net/bnx2x: not in enabled drivers build config 00:04:05.163 net/bnxt: not in enabled drivers build config 00:04:05.163 net/bonding: not in enabled drivers build config 00:04:05.163 net/cnxk: not in enabled drivers build config 00:04:05.163 net/cpfl: not in enabled drivers build config 00:04:05.163 net/cxgbe: not in enabled drivers build config 00:04:05.163 net/dpaa: not in enabled drivers build config 00:04:05.163 net/dpaa2: not in enabled drivers build config 00:04:05.163 net/e1000: not in enabled drivers build config 00:04:05.163 net/ena: not in enabled drivers build config 00:04:05.163 net/enetc: not in enabled drivers build config 00:04:05.163 net/enetfec: not in enabled drivers build config 00:04:05.163 net/enic: not in enabled drivers build config 00:04:05.163 net/failsafe: not in enabled drivers build config 00:04:05.163 net/fm10k: not in enabled drivers build config 00:04:05.163 net/gve: not in enabled drivers build config 00:04:05.163 net/hinic: not in enabled drivers build config 00:04:05.163 net/hns3: not in enabled drivers build config 00:04:05.163 net/i40e: not in enabled drivers build config 00:04:05.163 net/iavf: not in enabled drivers build config 00:04:05.163 net/ice: not in enabled drivers build config 00:04:05.163 net/idpf: not in enabled drivers build config 00:04:05.163 net/igc: not in enabled drivers build config 00:04:05.163 net/ionic: not in enabled drivers build config 00:04:05.163 net/ipn3ke: not in enabled drivers build config 00:04:05.163 net/ixgbe: not in enabled drivers build config 00:04:05.163 net/mana: not in enabled drivers build config 00:04:05.163 net/memif: not in enabled drivers build config 00:04:05.163 net/mlx4: not in enabled drivers build config 00:04:05.163 net/mlx5: not in enabled drivers build config 00:04:05.163 net/mvneta: not in enabled drivers build config 00:04:05.163 net/mvpp2: not in enabled drivers build config 00:04:05.163 net/netvsc: not in enabled drivers build config 00:04:05.163 net/nfb: not in enabled drivers build config 00:04:05.163 net/nfp: not in enabled drivers build config 00:04:05.163 net/ngbe: not in enabled drivers build config 00:04:05.163 
net/null: not in enabled drivers build config 00:04:05.163 net/octeontx: not in enabled drivers build config 00:04:05.163 net/octeon_ep: not in enabled drivers build config 00:04:05.163 net/pcap: not in enabled drivers build config 00:04:05.163 net/pfe: not in enabled drivers build config 00:04:05.163 net/qede: not in enabled drivers build config 00:04:05.163 net/ring: not in enabled drivers build config 00:04:05.163 net/sfc: not in enabled drivers build config 00:04:05.163 net/softnic: not in enabled drivers build config 00:04:05.163 net/tap: not in enabled drivers build config 00:04:05.163 net/thunderx: not in enabled drivers build config 00:04:05.163 net/txgbe: not in enabled drivers build config 00:04:05.163 net/vdev_netvsc: not in enabled drivers build config 00:04:05.163 net/vhost: not in enabled drivers build config 00:04:05.163 net/virtio: not in enabled drivers build config 00:04:05.163 net/vmxnet3: not in enabled drivers build config 00:04:05.163 raw/*: missing internal dependency, "rawdev" 00:04:05.163 crypto/armv8: not in enabled drivers build config 00:04:05.163 crypto/bcmfs: not in enabled drivers build config 00:04:05.163 crypto/caam_jr: not in enabled drivers build config 00:04:05.163 crypto/ccp: not in enabled drivers build config 00:04:05.163 crypto/cnxk: not in enabled drivers build config 00:04:05.163 crypto/dpaa_sec: not in enabled drivers build config 00:04:05.163 crypto/dpaa2_sec: not in enabled drivers build config 00:04:05.163 crypto/ipsec_mb: not in enabled drivers build config 00:04:05.163 crypto/mlx5: not in enabled drivers build config 00:04:05.163 crypto/mvsam: not in enabled drivers build config 00:04:05.163 crypto/nitrox: not in enabled drivers build config 00:04:05.163 crypto/null: not in enabled drivers build config 00:04:05.163 crypto/octeontx: not in enabled drivers build config 00:04:05.163 crypto/openssl: not in enabled drivers build config 00:04:05.163 crypto/scheduler: not in enabled drivers build config 00:04:05.163 crypto/uadk: not in enabled drivers build config 00:04:05.163 crypto/virtio: not in enabled drivers build config 00:04:05.163 compress/isal: not in enabled drivers build config 00:04:05.163 compress/mlx5: not in enabled drivers build config 00:04:05.163 compress/nitrox: not in enabled drivers build config 00:04:05.163 compress/octeontx: not in enabled drivers build config 00:04:05.163 compress/zlib: not in enabled drivers build config 00:04:05.163 regex/*: missing internal dependency, "regexdev" 00:04:05.163 ml/*: missing internal dependency, "mldev" 00:04:05.163 vdpa/ifc: not in enabled drivers build config 00:04:05.163 vdpa/mlx5: not in enabled drivers build config 00:04:05.163 vdpa/nfp: not in enabled drivers build config 00:04:05.163 vdpa/sfc: not in enabled drivers build config 00:04:05.163 event/*: missing internal dependency, "eventdev" 00:04:05.163 baseband/*: missing internal dependency, "bbdev" 00:04:05.163 gpu/*: missing internal dependency, "gpudev" 00:04:05.163 00:04:05.163 00:04:05.422 Build targets in project: 85 00:04:05.422 00:04:05.422 DPDK 24.03.0 00:04:05.422 00:04:05.422 User defined options 00:04:05.422 buildtype : debug 00:04:05.422 default_library : shared 00:04:05.422 libdir : lib 00:04:05.422 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:05.422 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:05.422 c_link_args : 00:04:05.422 cpu_instruction_set: native 00:04:05.422 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:05.422 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:05.422 enable_docs : false 00:04:05.422 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:05.422 enable_kmods : false 00:04:05.422 max_lcores : 128 00:04:05.422 tests : false 00:04:05.422 00:04:05.422 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:05.681 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:04:05.945 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:05.945 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:05.945 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:05.945 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:05.945 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:05.945 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:05.945 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:05.945 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:05.945 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:05.945 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:05.945 [11/268] Linking static target lib/librte_kvargs.a 00:04:05.945 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:05.945 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:05.945 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:05.945 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:05.945 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:05.945 [17/268] Linking static target lib/librte_log.a 00:04:05.945 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:05.945 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:06.208 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:06.208 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:06.208 [22/268] Linking static target lib/librte_pci.a 00:04:06.208 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:06.208 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:06.208 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:06.208 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:06.468 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:06.468 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:06.468 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:06.468 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:06.468 [31/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:06.468 [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:06.468 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:06.468 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:06.468 [35/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:06.468 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:06.468 [37/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:06.468 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:06.468 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:06.468 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:06.468 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:06.468 [42/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:06.468 [43/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:06.468 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:06.468 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:06.468 [46/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:06.468 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:06.468 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:06.468 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:06.468 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:06.468 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:06.468 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:06.468 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:06.468 [54/268] Linking static target lib/librte_meter.a 00:04:06.468 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:06.468 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:06.468 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:06.468 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:06.468 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:06.468 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:06.468 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:06.468 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:06.468 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:06.468 [64/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:06.468 [65/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:06.468 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:06.468 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:06.468 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:06.468 [69/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:06.468 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:06.468 [71/268] Compiling 
C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:06.468 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:06.468 [73/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:06.468 [74/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:06.468 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:06.468 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:06.468 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:06.468 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:06.468 [79/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:06.468 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:06.468 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:06.468 [82/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.468 [83/268] Linking static target lib/librte_telemetry.a 00:04:06.468 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:06.468 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:06.468 [86/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:06.468 [87/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:06.468 [88/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.468 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:06.468 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:06.468 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:06.468 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:06.468 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:06.468 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:06.468 [95/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:06.468 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:06.468 [97/268] Linking static target lib/librte_ring.a 00:04:06.468 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:06.468 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:06.468 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:06.468 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:06.468 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:06.727 [103/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:06.727 [104/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:06.727 [105/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:06.727 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:06.727 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:06.727 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:06.727 [109/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:06.727 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:06.727 [111/268] Compiling C object 
lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:06.727 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:06.727 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:06.727 [114/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:06.727 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:06.727 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:06.727 [117/268] Linking static target lib/librte_rcu.a 00:04:06.727 [118/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:06.727 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:06.727 [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:06.727 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:06.727 [122/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:06.727 [123/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:06.727 [124/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:06.727 [125/268] Linking static target lib/librte_mempool.a 00:04:06.727 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:06.727 [127/268] Linking static target lib/librte_net.a 00:04:06.727 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:06.727 [129/268] Linking static target lib/librte_eal.a 00:04:06.727 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:06.727 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:06.727 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:06.727 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:06.727 [134/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.727 [135/268] Linking static target lib/librte_cmdline.a 00:04:06.727 [136/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.727 [137/268] Linking target lib/librte_log.so.24.1 00:04:06.727 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.727 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:06.727 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:06.727 [141/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:06.727 [142/268] Linking static target lib/librte_mbuf.a 00:04:06.986 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:06.986 [144/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:06.986 [145/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.986 [146/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:06.986 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.986 [148/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.986 [149/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:06.986 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:06.986 [151/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:06.986 [152/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:06.986 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:06.986 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:06.986 [155/268] Linking target lib/librte_telemetry.so.24.1 00:04:06.986 [156/268] Linking target lib/librte_kvargs.so.24.1 00:04:06.986 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:06.986 [158/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:06.986 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:06.986 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:06.986 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:06.986 [162/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:06.986 [163/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:06.986 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:06.986 [165/268] Linking static target lib/librte_timer.a 00:04:06.986 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:06.986 [167/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:06.986 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:06.986 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:06.986 [170/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:06.986 [171/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:06.986 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:06.986 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:06.986 [174/268] Linking static target lib/librte_compressdev.a 00:04:06.986 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:06.986 [176/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:06.986 [177/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:06.986 [178/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:06.986 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:06.986 [180/268] Linking static target lib/librte_security.a 00:04:06.986 [181/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:06.986 [182/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:06.986 [183/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:06.986 [184/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:06.986 [185/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:06.986 [186/268] Linking static target lib/librte_dmadev.a 00:04:06.986 [187/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:07.245 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:07.245 [189/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:07.246 [190/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:07.246 [191/268] Linking static target lib/librte_power.a 00:04:07.246 [192/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:07.246 [193/268] Linking static target lib/librte_hash.a 00:04:07.246 [194/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:07.246 [195/268] Linking static target lib/librte_reorder.a 00:04:07.246 [196/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:07.246 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:07.246 [198/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:07.246 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:07.246 [200/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:07.246 [201/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:07.246 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:07.246 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:07.246 [204/268] Linking static target drivers/librte_bus_pci.a 00:04:07.246 [205/268] Linking static target drivers/librte_bus_vdev.a 00:04:07.246 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:07.246 [207/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.505 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:07.505 [209/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:07.505 [210/268] Linking static target lib/librte_cryptodev.a 00:04:07.505 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:07.506 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:07.506 [213/268] Linking static target drivers/librte_mempool_ring.a 00:04:07.506 [214/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.506 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.506 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:07.506 [217/268] Linking static target lib/librte_ethdev.a 00:04:07.506 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.765 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.765 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.765 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.765 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.765 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:07.765 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.024 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.024 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.025 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.963 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:08.963 
[229/268] Linking static target lib/librte_vhost.a 00:04:09.224 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.606 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.918 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:15.918 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.179 [234/268] Linking target lib/librte_eal.so.24.1 00:04:16.179 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:16.179 [236/268] Linking target lib/librte_pci.so.24.1 00:04:16.179 [237/268] Linking target lib/librte_ring.so.24.1 00:04:16.179 [238/268] Linking target lib/librte_timer.so.24.1 00:04:16.179 [239/268] Linking target lib/librte_meter.so.24.1 00:04:16.179 [240/268] Linking target lib/librte_dmadev.so.24.1 00:04:16.179 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:16.439 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:16.439 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:16.439 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:16.439 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:16.439 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:16.439 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:16.439 [248/268] Linking target lib/librte_mempool.so.24.1 00:04:16.439 [249/268] Linking target lib/librte_rcu.so.24.1 00:04:16.439 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:16.439 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:16.699 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:16.699 [253/268] Linking target lib/librte_mbuf.so.24.1 00:04:16.699 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:16.699 [255/268] Linking target lib/librte_net.so.24.1 00:04:16.699 [256/268] Linking target lib/librte_reorder.so.24.1 00:04:16.699 [257/268] Linking target lib/librte_compressdev.so.24.1 00:04:16.699 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:16.959 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:16.959 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:16.959 [261/268] Linking target lib/librte_hash.so.24.1 00:04:16.959 [262/268] Linking target lib/librte_security.so.24.1 00:04:16.959 [263/268] Linking target lib/librte_cmdline.so.24.1 00:04:16.959 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:16.959 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:17.219 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:17.219 [267/268] Linking target lib/librte_power.so.24.1 00:04:17.219 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:17.219 INFO: autodetecting backend as ninja 00:04:17.219 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:04:18.159 CC lib/ut_mock/mock.o 00:04:18.159 CC lib/log/log.o 00:04:18.159 CC lib/log/log_flags.o 00:04:18.159 CC 
lib/log/log_deprecated.o 00:04:18.159 CC lib/ut/ut.o 00:04:18.159 LIB libspdk_ut_mock.a 00:04:18.159 LIB libspdk_ut.a 00:04:18.159 LIB libspdk_log.a 00:04:18.418 SO libspdk_ut_mock.so.6.0 00:04:18.418 SO libspdk_log.so.7.0 00:04:18.418 SO libspdk_ut.so.2.0 00:04:18.418 SYMLINK libspdk_ut_mock.so 00:04:18.418 SYMLINK libspdk_log.so 00:04:18.418 SYMLINK libspdk_ut.so 00:04:18.677 CC lib/ioat/ioat.o 00:04:18.677 CC lib/util/base64.o 00:04:18.677 CC lib/dma/dma.o 00:04:18.677 CC lib/util/bit_array.o 00:04:18.677 CC lib/util/cpuset.o 00:04:18.677 CC lib/util/crc32c.o 00:04:18.677 CC lib/util/crc16.o 00:04:18.677 CC lib/util/crc32.o 00:04:18.677 CC lib/util/crc32_ieee.o 00:04:18.677 CC lib/util/crc64.o 00:04:18.677 CC lib/util/dif.o 00:04:18.677 CC lib/util/fd.o 00:04:18.677 CC lib/util/hexlify.o 00:04:18.678 CC lib/util/fd_group.o 00:04:18.678 CC lib/util/file.o 00:04:18.678 CC lib/util/iov.o 00:04:18.678 CC lib/util/math.o 00:04:18.678 CC lib/util/net.o 00:04:18.678 CC lib/util/pipe.o 00:04:18.678 CC lib/util/strerror_tls.o 00:04:18.678 CC lib/util/string.o 00:04:18.678 CC lib/util/xor.o 00:04:18.678 CC lib/util/uuid.o 00:04:18.678 CC lib/util/zipf.o 00:04:18.678 CXX lib/trace_parser/trace.o 00:04:18.936 CC lib/vfio_user/host/vfio_user_pci.o 00:04:18.936 CC lib/vfio_user/host/vfio_user.o 00:04:18.936 LIB libspdk_dma.a 00:04:18.937 SO libspdk_dma.so.4.0 00:04:18.937 LIB libspdk_ioat.a 00:04:18.937 SO libspdk_ioat.so.7.0 00:04:18.937 SYMLINK libspdk_dma.so 00:04:18.937 LIB libspdk_vfio_user.a 00:04:18.937 SYMLINK libspdk_ioat.so 00:04:18.937 SO libspdk_vfio_user.so.5.0 00:04:19.196 SYMLINK libspdk_vfio_user.so 00:04:19.196 LIB libspdk_util.a 00:04:19.196 SO libspdk_util.so.10.0 00:04:19.196 SYMLINK libspdk_util.so 00:04:19.457 LIB libspdk_trace_parser.a 00:04:19.457 SO libspdk_trace_parser.so.5.0 00:04:19.457 SYMLINK libspdk_trace_parser.so 00:04:19.717 CC lib/conf/conf.o 00:04:19.717 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:19.717 CC lib/rdma_provider/common.o 00:04:19.717 CC lib/json/json_parse.o 00:04:19.717 CC lib/json/json_util.o 00:04:19.717 CC lib/json/json_write.o 00:04:19.717 CC lib/idxd/idxd.o 00:04:19.717 CC lib/idxd/idxd_user.o 00:04:19.717 CC lib/rdma_utils/rdma_utils.o 00:04:19.717 CC lib/vmd/vmd.o 00:04:19.717 CC lib/idxd/idxd_kernel.o 00:04:19.717 CC lib/vmd/led.o 00:04:19.717 CC lib/env_dpdk/env.o 00:04:19.717 CC lib/env_dpdk/memory.o 00:04:19.717 CC lib/env_dpdk/pci.o 00:04:19.717 CC lib/env_dpdk/init.o 00:04:19.717 CC lib/env_dpdk/threads.o 00:04:19.717 CC lib/env_dpdk/pci_ioat.o 00:04:19.717 CC lib/env_dpdk/pci_virtio.o 00:04:19.717 CC lib/env_dpdk/pci_vmd.o 00:04:19.717 CC lib/env_dpdk/pci_idxd.o 00:04:19.717 CC lib/env_dpdk/pci_event.o 00:04:19.717 CC lib/env_dpdk/sigbus_handler.o 00:04:19.717 CC lib/env_dpdk/pci_dpdk.o 00:04:19.717 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:19.717 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:19.717 LIB libspdk_rdma_provider.a 00:04:19.717 SO libspdk_rdma_provider.so.6.0 00:04:19.717 LIB libspdk_conf.a 00:04:19.977 LIB libspdk_rdma_utils.a 00:04:19.977 SO libspdk_conf.so.6.0 00:04:19.977 LIB libspdk_json.a 00:04:19.977 SYMLINK libspdk_rdma_provider.so 00:04:19.977 SO libspdk_rdma_utils.so.1.0 00:04:19.977 SO libspdk_json.so.6.0 00:04:19.977 SYMLINK libspdk_conf.so 00:04:19.977 SYMLINK libspdk_rdma_utils.so 00:04:19.977 SYMLINK libspdk_json.so 00:04:19.977 LIB libspdk_idxd.a 00:04:20.237 SO libspdk_idxd.so.12.0 00:04:20.237 LIB libspdk_vmd.a 00:04:20.237 SO libspdk_vmd.so.6.0 00:04:20.237 SYMLINK libspdk_idxd.so 00:04:20.237 SYMLINK 
libspdk_vmd.so 00:04:20.237 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:20.237 CC lib/jsonrpc/jsonrpc_server.o 00:04:20.237 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:20.237 CC lib/jsonrpc/jsonrpc_client.o 00:04:20.498 LIB libspdk_jsonrpc.a 00:04:20.498 SO libspdk_jsonrpc.so.6.0 00:04:20.498 SYMLINK libspdk_jsonrpc.so 00:04:20.758 LIB libspdk_env_dpdk.a 00:04:20.758 SO libspdk_env_dpdk.so.15.0 00:04:20.758 SYMLINK libspdk_env_dpdk.so 00:04:20.758 CC lib/rpc/rpc.o 00:04:21.018 LIB libspdk_rpc.a 00:04:21.018 SO libspdk_rpc.so.6.0 00:04:21.018 SYMLINK libspdk_rpc.so 00:04:21.278 CC lib/keyring/keyring.o 00:04:21.278 CC lib/keyring/keyring_rpc.o 00:04:21.538 CC lib/trace/trace.o 00:04:21.538 CC lib/trace/trace_flags.o 00:04:21.538 CC lib/trace/trace_rpc.o 00:04:21.538 CC lib/notify/notify_rpc.o 00:04:21.539 CC lib/notify/notify.o 00:04:21.539 LIB libspdk_notify.a 00:04:21.539 LIB libspdk_keyring.a 00:04:21.539 SO libspdk_notify.so.6.0 00:04:21.539 SO libspdk_keyring.so.1.0 00:04:21.539 LIB libspdk_trace.a 00:04:21.539 SYMLINK libspdk_notify.so 00:04:21.539 SO libspdk_trace.so.10.0 00:04:21.799 SYMLINK libspdk_keyring.so 00:04:21.799 SYMLINK libspdk_trace.so 00:04:22.059 CC lib/thread/thread.o 00:04:22.059 CC lib/thread/iobuf.o 00:04:22.059 CC lib/sock/sock.o 00:04:22.059 CC lib/sock/sock_rpc.o 00:04:22.318 LIB libspdk_sock.a 00:04:22.318 SO libspdk_sock.so.10.0 00:04:22.318 SYMLINK libspdk_sock.so 00:04:22.887 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:22.887 CC lib/nvme/nvme_fabric.o 00:04:22.887 CC lib/nvme/nvme_ctrlr.o 00:04:22.887 CC lib/nvme/nvme_ns_cmd.o 00:04:22.887 CC lib/nvme/nvme_ns.o 00:04:22.887 CC lib/nvme/nvme_qpair.o 00:04:22.887 CC lib/nvme/nvme_pcie_common.o 00:04:22.887 CC lib/nvme/nvme_pcie.o 00:04:22.887 CC lib/nvme/nvme.o 00:04:22.887 CC lib/nvme/nvme_quirks.o 00:04:22.887 CC lib/nvme/nvme_transport.o 00:04:22.887 CC lib/nvme/nvme_discovery.o 00:04:22.887 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:22.887 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:22.887 CC lib/nvme/nvme_tcp.o 00:04:22.887 CC lib/nvme/nvme_opal.o 00:04:22.887 CC lib/nvme/nvme_io_msg.o 00:04:22.887 CC lib/nvme/nvme_poll_group.o 00:04:22.887 CC lib/nvme/nvme_zns.o 00:04:22.887 CC lib/nvme/nvme_stubs.o 00:04:22.887 CC lib/nvme/nvme_auth.o 00:04:22.887 CC lib/nvme/nvme_cuse.o 00:04:22.887 CC lib/nvme/nvme_vfio_user.o 00:04:22.887 CC lib/nvme/nvme_rdma.o 00:04:23.147 LIB libspdk_thread.a 00:04:23.147 SO libspdk_thread.so.10.1 00:04:23.147 SYMLINK libspdk_thread.so 00:04:23.407 CC lib/vfu_tgt/tgt_endpoint.o 00:04:23.407 CC lib/vfu_tgt/tgt_rpc.o 00:04:23.407 CC lib/blob/zeroes.o 00:04:23.407 CC lib/blob/blobstore.o 00:04:23.407 CC lib/blob/request.o 00:04:23.407 CC lib/accel/accel_rpc.o 00:04:23.407 CC lib/blob/blob_bs_dev.o 00:04:23.407 CC lib/accel/accel.o 00:04:23.407 CC lib/accel/accel_sw.o 00:04:23.407 CC lib/virtio/virtio.o 00:04:23.407 CC lib/virtio/virtio_vhost_user.o 00:04:23.407 CC lib/virtio/virtio_vfio_user.o 00:04:23.407 CC lib/virtio/virtio_pci.o 00:04:23.407 CC lib/init/json_config.o 00:04:23.407 CC lib/init/subsystem.o 00:04:23.407 CC lib/init/subsystem_rpc.o 00:04:23.407 CC lib/init/rpc.o 00:04:23.667 LIB libspdk_init.a 00:04:23.667 LIB libspdk_vfu_tgt.a 00:04:23.667 SO libspdk_init.so.5.0 00:04:23.667 LIB libspdk_virtio.a 00:04:23.667 SO libspdk_vfu_tgt.so.3.0 00:04:23.667 SO libspdk_virtio.so.7.0 00:04:23.667 SYMLINK libspdk_init.so 00:04:23.927 SYMLINK libspdk_vfu_tgt.so 00:04:23.927 SYMLINK libspdk_virtio.so 00:04:24.186 CC lib/event/app.o 00:04:24.186 CC lib/event/reactor.o 00:04:24.186 CC lib/event/log_rpc.o 
00:04:24.186 CC lib/event/app_rpc.o 00:04:24.186 CC lib/event/scheduler_static.o 00:04:24.186 LIB libspdk_accel.a 00:04:24.186 SO libspdk_accel.so.16.0 00:04:24.186 SYMLINK libspdk_accel.so 00:04:24.446 LIB libspdk_nvme.a 00:04:24.446 LIB libspdk_event.a 00:04:24.446 SO libspdk_nvme.so.13.1 00:04:24.446 SO libspdk_event.so.14.0 00:04:24.446 SYMLINK libspdk_event.so 00:04:24.706 CC lib/bdev/bdev.o 00:04:24.706 CC lib/bdev/bdev_rpc.o 00:04:24.707 CC lib/bdev/bdev_zone.o 00:04:24.707 CC lib/bdev/part.o 00:04:24.707 CC lib/bdev/scsi_nvme.o 00:04:24.707 SYMLINK libspdk_nvme.so 00:04:25.648 LIB libspdk_blob.a 00:04:25.648 SO libspdk_blob.so.11.0 00:04:25.648 SYMLINK libspdk_blob.so 00:04:25.908 CC lib/blobfs/blobfs.o 00:04:25.908 CC lib/blobfs/tree.o 00:04:25.908 CC lib/lvol/lvol.o 00:04:26.478 LIB libspdk_bdev.a 00:04:26.478 SO libspdk_bdev.so.16.0 00:04:26.478 SYMLINK libspdk_bdev.so 00:04:26.478 LIB libspdk_blobfs.a 00:04:26.478 SO libspdk_blobfs.so.10.0 00:04:26.478 LIB libspdk_lvol.a 00:04:26.737 SYMLINK libspdk_blobfs.so 00:04:26.737 SO libspdk_lvol.so.10.0 00:04:26.737 SYMLINK libspdk_lvol.so 00:04:26.737 CC lib/nbd/nbd.o 00:04:26.737 CC lib/nbd/nbd_rpc.o 00:04:26.737 CC lib/ublk/ublk.o 00:04:26.737 CC lib/ublk/ublk_rpc.o 00:04:26.737 CC lib/ftl/ftl_core.o 00:04:26.737 CC lib/ftl/ftl_init.o 00:04:26.737 CC lib/ftl/ftl_debug.o 00:04:26.737 CC lib/ftl/ftl_layout.o 00:04:26.737 CC lib/ftl/ftl_io.o 00:04:26.737 CC lib/ftl/ftl_sb.o 00:04:26.737 CC lib/scsi/dev.o 00:04:26.737 CC lib/ftl/ftl_l2p.o 00:04:26.737 CC lib/ftl/ftl_l2p_flat.o 00:04:26.737 CC lib/ftl/ftl_nv_cache.o 00:04:26.737 CC lib/scsi/lun.o 00:04:26.737 CC lib/nvmf/ctrlr_discovery.o 00:04:26.737 CC lib/ftl/ftl_band.o 00:04:26.737 CC lib/scsi/port.o 00:04:26.737 CC lib/nvmf/ctrlr.o 00:04:26.737 CC lib/nvmf/ctrlr_bdev.o 00:04:26.737 CC lib/ftl/ftl_band_ops.o 00:04:26.737 CC lib/scsi/scsi.o 00:04:26.737 CC lib/ftl/ftl_writer.o 00:04:26.737 CC lib/scsi/scsi_bdev.o 00:04:26.737 CC lib/scsi/scsi_rpc.o 00:04:26.737 CC lib/ftl/ftl_rq.o 00:04:26.737 CC lib/nvmf/subsystem.o 00:04:26.737 CC lib/scsi/scsi_pr.o 00:04:26.737 CC lib/ftl/ftl_reloc.o 00:04:26.737 CC lib/nvmf/nvmf.o 00:04:26.737 CC lib/ftl/ftl_l2p_cache.o 00:04:26.737 CC lib/nvmf/nvmf_rpc.o 00:04:26.737 CC lib/nvmf/tcp.o 00:04:26.737 CC lib/scsi/task.o 00:04:26.737 CC lib/ftl/ftl_p2l.o 00:04:26.737 CC lib/nvmf/transport.o 00:04:26.737 CC lib/ftl/mngt/ftl_mngt.o 00:04:26.737 CC lib/nvmf/stubs.o 00:04:26.737 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:26.737 CC lib/nvmf/mdns_server.o 00:04:26.737 CC lib/nvmf/vfio_user.o 00:04:26.737 CC lib/nvmf/rdma.o 00:04:26.737 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:26.737 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:26.737 CC lib/nvmf/auth.o 00:04:26.737 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:26.737 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:26.737 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:26.737 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:26.737 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:26.737 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:26.737 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:26.737 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:26.737 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:26.737 CC lib/ftl/utils/ftl_conf.o 00:04:26.737 CC lib/ftl/utils/ftl_mempool.o 00:04:26.737 CC lib/ftl/utils/ftl_md.o 00:04:26.737 CC lib/ftl/utils/ftl_bitmap.o 00:04:26.737 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:26.737 CC lib/ftl/utils/ftl_property.o 00:04:26.737 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:26.737 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:26.737 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:04:26.737 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:26.737 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:26.737 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:26.738 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:26.738 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:26.738 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:26.738 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:26.738 CC lib/ftl/base/ftl_base_dev.o 00:04:26.738 CC lib/ftl/base/ftl_base_bdev.o 00:04:26.738 CC lib/ftl/ftl_trace.o 00:04:27.304 LIB libspdk_nbd.a 00:04:27.304 SO libspdk_nbd.so.7.0 00:04:27.304 SYMLINK libspdk_nbd.so 00:04:27.563 LIB libspdk_ublk.a 00:04:27.563 LIB libspdk_scsi.a 00:04:27.563 SO libspdk_ublk.so.3.0 00:04:27.563 SO libspdk_scsi.so.9.0 00:04:27.563 SYMLINK libspdk_ublk.so 00:04:27.563 SYMLINK libspdk_scsi.so 00:04:27.563 LIB libspdk_ftl.a 00:04:27.822 SO libspdk_ftl.so.9.0 00:04:27.822 CC lib/vhost/vhost.o 00:04:27.822 CC lib/vhost/vhost_rpc.o 00:04:27.822 CC lib/vhost/vhost_blk.o 00:04:27.822 CC lib/vhost/vhost_scsi.o 00:04:27.822 CC lib/vhost/rte_vhost_user.o 00:04:27.822 CC lib/iscsi/conn.o 00:04:27.822 CC lib/iscsi/init_grp.o 00:04:27.822 CC lib/iscsi/param.o 00:04:27.822 CC lib/iscsi/iscsi.o 00:04:27.822 CC lib/iscsi/portal_grp.o 00:04:27.822 CC lib/iscsi/md5.o 00:04:27.822 CC lib/iscsi/tgt_node.o 00:04:27.822 CC lib/iscsi/iscsi_subsystem.o 00:04:27.822 CC lib/iscsi/iscsi_rpc.o 00:04:27.822 CC lib/iscsi/task.o 00:04:28.082 SYMLINK libspdk_ftl.so 00:04:28.696 LIB libspdk_nvmf.a 00:04:28.696 SO libspdk_nvmf.so.19.0 00:04:28.696 LIB libspdk_vhost.a 00:04:28.696 SYMLINK libspdk_nvmf.so 00:04:28.696 SO libspdk_vhost.so.8.0 00:04:28.973 SYMLINK libspdk_vhost.so 00:04:28.973 LIB libspdk_iscsi.a 00:04:28.973 SO libspdk_iscsi.so.8.0 00:04:28.973 SYMLINK libspdk_iscsi.so 00:04:29.544 CC module/env_dpdk/env_dpdk_rpc.o 00:04:29.544 CC module/vfu_device/vfu_virtio.o 00:04:29.544 CC module/vfu_device/vfu_virtio_blk.o 00:04:29.544 CC module/vfu_device/vfu_virtio_scsi.o 00:04:29.544 CC module/vfu_device/vfu_virtio_rpc.o 00:04:29.804 CC module/accel/dsa/accel_dsa.o 00:04:29.804 CC module/accel/dsa/accel_dsa_rpc.o 00:04:29.804 CC module/blob/bdev/blob_bdev.o 00:04:29.804 CC module/keyring/linux/keyring_rpc.o 00:04:29.804 CC module/keyring/linux/keyring.o 00:04:29.804 CC module/sock/posix/posix.o 00:04:29.804 CC module/accel/ioat/accel_ioat.o 00:04:29.804 CC module/accel/ioat/accel_ioat_rpc.o 00:04:29.804 CC module/scheduler/gscheduler/gscheduler.o 00:04:29.804 CC module/accel/iaa/accel_iaa_rpc.o 00:04:29.804 CC module/accel/iaa/accel_iaa.o 00:04:29.804 LIB libspdk_env_dpdk_rpc.a 00:04:29.804 CC module/accel/error/accel_error.o 00:04:29.804 CC module/accel/error/accel_error_rpc.o 00:04:29.804 CC module/keyring/file/keyring.o 00:04:29.804 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:29.804 CC module/keyring/file/keyring_rpc.o 00:04:29.804 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:29.804 SO libspdk_env_dpdk_rpc.so.6.0 00:04:29.804 SYMLINK libspdk_env_dpdk_rpc.so 00:04:29.804 LIB libspdk_keyring_linux.a 00:04:29.804 LIB libspdk_scheduler_gscheduler.a 00:04:29.804 LIB libspdk_keyring_file.a 00:04:29.804 SO libspdk_keyring_linux.so.1.0 00:04:29.804 LIB libspdk_accel_error.a 00:04:29.804 LIB libspdk_scheduler_dpdk_governor.a 00:04:29.804 SO libspdk_scheduler_gscheduler.so.4.0 00:04:29.804 SO libspdk_keyring_file.so.1.0 00:04:29.804 LIB libspdk_accel_ioat.a 00:04:29.804 LIB libspdk_accel_iaa.a 00:04:29.804 LIB libspdk_scheduler_dynamic.a 00:04:29.804 SO libspdk_accel_error.so.2.0 00:04:29.804 LIB 
libspdk_accel_dsa.a 00:04:29.804 LIB libspdk_blob_bdev.a 00:04:29.804 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:29.804 SO libspdk_scheduler_dynamic.so.4.0 00:04:29.804 SO libspdk_accel_iaa.so.3.0 00:04:29.804 SYMLINK libspdk_keyring_linux.so 00:04:29.804 SO libspdk_accel_ioat.so.6.0 00:04:30.064 SO libspdk_accel_dsa.so.5.0 00:04:30.064 SYMLINK libspdk_scheduler_gscheduler.so 00:04:30.064 SYMLINK libspdk_keyring_file.so 00:04:30.064 SO libspdk_blob_bdev.so.11.0 00:04:30.064 SYMLINK libspdk_accel_error.so 00:04:30.064 SYMLINK libspdk_scheduler_dynamic.so 00:04:30.064 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:30.064 SYMLINK libspdk_accel_iaa.so 00:04:30.064 SYMLINK libspdk_blob_bdev.so 00:04:30.064 SYMLINK libspdk_accel_ioat.so 00:04:30.064 SYMLINK libspdk_accel_dsa.so 00:04:30.064 LIB libspdk_vfu_device.a 00:04:30.064 SO libspdk_vfu_device.so.3.0 00:04:30.064 SYMLINK libspdk_vfu_device.so 00:04:30.323 LIB libspdk_sock_posix.a 00:04:30.323 SO libspdk_sock_posix.so.6.0 00:04:30.323 SYMLINK libspdk_sock_posix.so 00:04:30.323 CC module/bdev/error/vbdev_error.o 00:04:30.323 CC module/bdev/error/vbdev_error_rpc.o 00:04:30.323 CC module/bdev/passthru/vbdev_passthru.o 00:04:30.323 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:30.323 CC module/bdev/lvol/vbdev_lvol.o 00:04:30.323 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:30.323 CC module/bdev/delay/vbdev_delay.o 00:04:30.323 CC module/bdev/split/vbdev_split.o 00:04:30.323 CC module/bdev/ftl/bdev_ftl.o 00:04:30.323 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:30.323 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:30.323 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:30.323 CC module/bdev/split/vbdev_split_rpc.o 00:04:30.323 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:30.323 CC module/bdev/raid/bdev_raid.o 00:04:30.323 CC module/bdev/raid/bdev_raid_rpc.o 00:04:30.323 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:30.323 CC module/bdev/raid/raid0.o 00:04:30.323 CC module/bdev/raid/raid1.o 00:04:30.323 CC module/bdev/gpt/gpt.o 00:04:30.323 CC module/bdev/raid/bdev_raid_sb.o 00:04:30.323 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:30.323 CC module/blobfs/bdev/blobfs_bdev.o 00:04:30.323 CC module/bdev/raid/concat.o 00:04:30.323 CC module/bdev/malloc/bdev_malloc.o 00:04:30.323 CC module/bdev/gpt/vbdev_gpt.o 00:04:30.323 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:30.323 CC module/bdev/null/bdev_null.o 00:04:30.323 CC module/bdev/null/bdev_null_rpc.o 00:04:30.323 CC module/bdev/nvme/nvme_rpc.o 00:04:30.323 CC module/bdev/nvme/bdev_nvme.o 00:04:30.323 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:30.323 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:30.323 CC module/bdev/nvme/bdev_mdns_client.o 00:04:30.323 CC module/bdev/nvme/vbdev_opal.o 00:04:30.323 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:30.323 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:30.323 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:30.323 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:30.323 CC module/bdev/iscsi/bdev_iscsi.o 00:04:30.582 CC module/bdev/aio/bdev_aio.o 00:04:30.582 CC module/bdev/aio/bdev_aio_rpc.o 00:04:30.582 LIB libspdk_bdev_split.a 00:04:30.582 LIB libspdk_blobfs_bdev.a 00:04:30.582 LIB libspdk_bdev_error.a 00:04:30.841 SO libspdk_blobfs_bdev.so.6.0 00:04:30.841 LIB libspdk_bdev_ftl.a 00:04:30.841 SO libspdk_bdev_error.so.6.0 00:04:30.841 LIB libspdk_bdev_gpt.a 00:04:30.841 SO libspdk_bdev_split.so.6.0 00:04:30.841 LIB libspdk_bdev_passthru.a 00:04:30.841 LIB libspdk_bdev_null.a 00:04:30.841 SO libspdk_bdev_gpt.so.6.0 00:04:30.841 SO 
libspdk_bdev_ftl.so.6.0 00:04:30.841 SYMLINK libspdk_blobfs_bdev.so 00:04:30.841 SO libspdk_bdev_null.so.6.0 00:04:30.841 SO libspdk_bdev_passthru.so.6.0 00:04:30.841 SYMLINK libspdk_bdev_error.so 00:04:30.841 SYMLINK libspdk_bdev_split.so 00:04:30.841 LIB libspdk_bdev_zone_block.a 00:04:30.841 SYMLINK libspdk_bdev_gpt.so 00:04:30.841 SYMLINK libspdk_bdev_ftl.so 00:04:30.841 LIB libspdk_bdev_aio.a 00:04:30.841 SO libspdk_bdev_zone_block.so.6.0 00:04:30.841 LIB libspdk_bdev_malloc.a 00:04:30.842 LIB libspdk_bdev_delay.a 00:04:30.842 SYMLINK libspdk_bdev_passthru.so 00:04:30.842 LIB libspdk_bdev_iscsi.a 00:04:30.842 SYMLINK libspdk_bdev_null.so 00:04:30.842 SO libspdk_bdev_aio.so.6.0 00:04:30.842 SO libspdk_bdev_delay.so.6.0 00:04:30.842 SO libspdk_bdev_malloc.so.6.0 00:04:30.842 SO libspdk_bdev_iscsi.so.6.0 00:04:30.842 SYMLINK libspdk_bdev_zone_block.so 00:04:30.842 LIB libspdk_bdev_lvol.a 00:04:30.842 SYMLINK libspdk_bdev_aio.so 00:04:30.842 LIB libspdk_bdev_virtio.a 00:04:30.842 SYMLINK libspdk_bdev_delay.so 00:04:30.842 SYMLINK libspdk_bdev_malloc.so 00:04:30.842 SYMLINK libspdk_bdev_iscsi.so 00:04:30.842 SO libspdk_bdev_lvol.so.6.0 00:04:30.842 SO libspdk_bdev_virtio.so.6.0 00:04:31.101 SYMLINK libspdk_bdev_lvol.so 00:04:31.101 SYMLINK libspdk_bdev_virtio.so 00:04:31.101 LIB libspdk_bdev_raid.a 00:04:31.361 SO libspdk_bdev_raid.so.6.0 00:04:31.361 SYMLINK libspdk_bdev_raid.so 00:04:31.928 LIB libspdk_bdev_nvme.a 00:04:32.187 SO libspdk_bdev_nvme.so.7.0 00:04:32.187 SYMLINK libspdk_bdev_nvme.so 00:04:32.756 CC module/event/subsystems/iobuf/iobuf.o 00:04:32.756 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:32.756 CC module/event/subsystems/scheduler/scheduler.o 00:04:32.756 CC module/event/subsystems/sock/sock.o 00:04:32.756 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:32.756 CC module/event/subsystems/vmd/vmd.o 00:04:32.756 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:32.756 CC module/event/subsystems/keyring/keyring.o 00:04:32.756 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:33.016 LIB libspdk_event_iobuf.a 00:04:33.016 LIB libspdk_event_vfu_tgt.a 00:04:33.016 LIB libspdk_event_scheduler.a 00:04:33.016 LIB libspdk_event_sock.a 00:04:33.016 SO libspdk_event_vfu_tgt.so.3.0 00:04:33.016 LIB libspdk_event_keyring.a 00:04:33.016 SO libspdk_event_iobuf.so.3.0 00:04:33.016 SO libspdk_event_sock.so.5.0 00:04:33.016 SO libspdk_event_scheduler.so.4.0 00:04:33.016 LIB libspdk_event_vhost_blk.a 00:04:33.016 LIB libspdk_event_vmd.a 00:04:33.016 SO libspdk_event_keyring.so.1.0 00:04:33.016 SO libspdk_event_vhost_blk.so.3.0 00:04:33.016 SO libspdk_event_vmd.so.6.0 00:04:33.016 SYMLINK libspdk_event_vfu_tgt.so 00:04:33.016 SYMLINK libspdk_event_sock.so 00:04:33.016 SYMLINK libspdk_event_scheduler.so 00:04:33.016 SYMLINK libspdk_event_iobuf.so 00:04:33.016 SYMLINK libspdk_event_keyring.so 00:04:33.016 SYMLINK libspdk_event_vhost_blk.so 00:04:33.016 SYMLINK libspdk_event_vmd.so 00:04:33.276 CC module/event/subsystems/accel/accel.o 00:04:33.276 LIB libspdk_event_accel.a 00:04:33.536 SO libspdk_event_accel.so.6.0 00:04:33.536 SYMLINK libspdk_event_accel.so 00:04:33.795 CC module/event/subsystems/bdev/bdev.o 00:04:34.056 LIB libspdk_event_bdev.a 00:04:34.056 SO libspdk_event_bdev.so.6.0 00:04:34.056 SYMLINK libspdk_event_bdev.so 00:04:34.316 CC module/event/subsystems/ublk/ublk.o 00:04:34.316 CC module/event/subsystems/scsi/scsi.o 00:04:34.316 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:34.316 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:34.316 CC 
module/event/subsystems/nbd/nbd.o 00:04:34.575 LIB libspdk_event_ublk.a 00:04:34.575 LIB libspdk_event_nbd.a 00:04:34.575 LIB libspdk_event_scsi.a 00:04:34.575 SO libspdk_event_ublk.so.3.0 00:04:34.575 SO libspdk_event_nbd.so.6.0 00:04:34.575 SO libspdk_event_scsi.so.6.0 00:04:34.575 LIB libspdk_event_nvmf.a 00:04:34.575 SYMLINK libspdk_event_ublk.so 00:04:34.575 SYMLINK libspdk_event_nbd.so 00:04:34.575 SO libspdk_event_nvmf.so.6.0 00:04:34.575 SYMLINK libspdk_event_scsi.so 00:04:34.575 SYMLINK libspdk_event_nvmf.so 00:04:34.833 CC module/event/subsystems/iscsi/iscsi.o 00:04:34.833 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:35.093 LIB libspdk_event_iscsi.a 00:04:35.093 LIB libspdk_event_vhost_scsi.a 00:04:35.093 SO libspdk_event_vhost_scsi.so.3.0 00:04:35.093 SO libspdk_event_iscsi.so.6.0 00:04:35.093 SYMLINK libspdk_event_vhost_scsi.so 00:04:35.093 SYMLINK libspdk_event_iscsi.so 00:04:35.353 SO libspdk.so.6.0 00:04:35.353 SYMLINK libspdk.so 00:04:35.612 CC app/trace_record/trace_record.o 00:04:35.612 CC app/spdk_nvme_discover/discovery_aer.o 00:04:35.612 CXX app/trace/trace.o 00:04:35.612 CC app/spdk_top/spdk_top.o 00:04:35.612 CC app/spdk_nvme_identify/identify.o 00:04:35.612 CC app/spdk_lspci/spdk_lspci.o 00:04:35.612 CC app/spdk_nvme_perf/perf.o 00:04:35.612 CC test/rpc_client/rpc_client_test.o 00:04:35.612 TEST_HEADER include/spdk/accel.h 00:04:35.612 TEST_HEADER include/spdk/accel_module.h 00:04:35.612 TEST_HEADER include/spdk/assert.h 00:04:35.612 TEST_HEADER include/spdk/base64.h 00:04:35.612 TEST_HEADER include/spdk/barrier.h 00:04:35.612 TEST_HEADER include/spdk/bdev.h 00:04:35.612 TEST_HEADER include/spdk/bdev_module.h 00:04:35.612 TEST_HEADER include/spdk/bdev_zone.h 00:04:35.612 TEST_HEADER include/spdk/bit_array.h 00:04:35.612 TEST_HEADER include/spdk/bit_pool.h 00:04:35.612 TEST_HEADER include/spdk/blob_bdev.h 00:04:35.612 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:35.612 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:35.612 CC app/iscsi_tgt/iscsi_tgt.o 00:04:35.612 TEST_HEADER include/spdk/blobfs.h 00:04:35.612 TEST_HEADER include/spdk/blob.h 00:04:35.612 TEST_HEADER include/spdk/conf.h 00:04:35.612 TEST_HEADER include/spdk/crc16.h 00:04:35.612 TEST_HEADER include/spdk/cpuset.h 00:04:35.612 TEST_HEADER include/spdk/config.h 00:04:35.612 TEST_HEADER include/spdk/crc32.h 00:04:35.612 TEST_HEADER include/spdk/crc64.h 00:04:35.612 TEST_HEADER include/spdk/dif.h 00:04:35.612 TEST_HEADER include/spdk/dma.h 00:04:35.612 TEST_HEADER include/spdk/env_dpdk.h 00:04:35.612 TEST_HEADER include/spdk/endian.h 00:04:35.612 TEST_HEADER include/spdk/event.h 00:04:35.612 TEST_HEADER include/spdk/env.h 00:04:35.612 TEST_HEADER include/spdk/fd.h 00:04:35.612 TEST_HEADER include/spdk/file.h 00:04:35.612 TEST_HEADER include/spdk/fd_group.h 00:04:35.612 TEST_HEADER include/spdk/ftl.h 00:04:35.612 TEST_HEADER include/spdk/gpt_spec.h 00:04:35.612 TEST_HEADER include/spdk/hexlify.h 00:04:35.612 TEST_HEADER include/spdk/histogram_data.h 00:04:35.613 TEST_HEADER include/spdk/idxd.h 00:04:35.613 TEST_HEADER include/spdk/init.h 00:04:35.613 TEST_HEADER include/spdk/ioat_spec.h 00:04:35.613 TEST_HEADER include/spdk/idxd_spec.h 00:04:35.613 TEST_HEADER include/spdk/iscsi_spec.h 00:04:35.613 TEST_HEADER include/spdk/json.h 00:04:35.613 TEST_HEADER include/spdk/ioat.h 00:04:35.613 TEST_HEADER include/spdk/jsonrpc.h 00:04:35.613 TEST_HEADER include/spdk/keyring.h 00:04:35.613 TEST_HEADER include/spdk/likely.h 00:04:35.613 TEST_HEADER include/spdk/keyring_module.h 00:04:35.613 TEST_HEADER 
include/spdk/log.h 00:04:35.613 CC app/spdk_tgt/spdk_tgt.o 00:04:35.613 CC app/spdk_dd/spdk_dd.o 00:04:35.613 TEST_HEADER include/spdk/memory.h 00:04:35.613 TEST_HEADER include/spdk/lvol.h 00:04:35.613 TEST_HEADER include/spdk/nbd.h 00:04:35.613 TEST_HEADER include/spdk/net.h 00:04:35.613 TEST_HEADER include/spdk/mmio.h 00:04:35.613 TEST_HEADER include/spdk/notify.h 00:04:35.613 CC app/nvmf_tgt/nvmf_main.o 00:04:35.613 TEST_HEADER include/spdk/nvme.h 00:04:35.613 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:35.613 TEST_HEADER include/spdk/nvme_intel.h 00:04:35.613 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:35.613 TEST_HEADER include/spdk/nvme_spec.h 00:04:35.613 TEST_HEADER include/spdk/nvme_zns.h 00:04:35.613 TEST_HEADER include/spdk/nvmf.h 00:04:35.613 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:35.613 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:35.613 TEST_HEADER include/spdk/nvmf_spec.h 00:04:35.613 TEST_HEADER include/spdk/opal.h 00:04:35.613 TEST_HEADER include/spdk/nvmf_transport.h 00:04:35.613 TEST_HEADER include/spdk/pci_ids.h 00:04:35.613 TEST_HEADER include/spdk/opal_spec.h 00:04:35.613 TEST_HEADER include/spdk/reduce.h 00:04:35.613 TEST_HEADER include/spdk/queue.h 00:04:35.613 TEST_HEADER include/spdk/pipe.h 00:04:35.613 TEST_HEADER include/spdk/rpc.h 00:04:35.613 TEST_HEADER include/spdk/scheduler.h 00:04:35.613 TEST_HEADER include/spdk/scsi.h 00:04:35.613 TEST_HEADER include/spdk/scsi_spec.h 00:04:35.613 TEST_HEADER include/spdk/stdinc.h 00:04:35.613 TEST_HEADER include/spdk/sock.h 00:04:35.613 TEST_HEADER include/spdk/string.h 00:04:35.613 TEST_HEADER include/spdk/trace.h 00:04:35.613 TEST_HEADER include/spdk/thread.h 00:04:35.613 TEST_HEADER include/spdk/tree.h 00:04:35.613 TEST_HEADER include/spdk/trace_parser.h 00:04:35.613 TEST_HEADER include/spdk/ublk.h 00:04:35.613 TEST_HEADER include/spdk/util.h 00:04:35.613 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:35.613 TEST_HEADER include/spdk/version.h 00:04:35.613 TEST_HEADER include/spdk/uuid.h 00:04:35.613 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:35.613 TEST_HEADER include/spdk/vhost.h 00:04:35.613 TEST_HEADER include/spdk/vmd.h 00:04:35.613 TEST_HEADER include/spdk/xor.h 00:04:35.613 TEST_HEADER include/spdk/zipf.h 00:04:35.613 CXX test/cpp_headers/accel.o 00:04:35.613 CXX test/cpp_headers/assert.o 00:04:35.613 CXX test/cpp_headers/accel_module.o 00:04:35.613 CXX test/cpp_headers/base64.o 00:04:35.613 CXX test/cpp_headers/barrier.o 00:04:35.613 CXX test/cpp_headers/bdev.o 00:04:35.613 CXX test/cpp_headers/bdev_zone.o 00:04:35.613 CXX test/cpp_headers/bdev_module.o 00:04:35.613 CXX test/cpp_headers/bit_pool.o 00:04:35.613 CXX test/cpp_headers/blob_bdev.o 00:04:35.613 CXX test/cpp_headers/blobfs_bdev.o 00:04:35.613 CXX test/cpp_headers/bit_array.o 00:04:35.613 CXX test/cpp_headers/blobfs.o 00:04:35.613 CXX test/cpp_headers/conf.o 00:04:35.613 CXX test/cpp_headers/config.o 00:04:35.613 CXX test/cpp_headers/cpuset.o 00:04:35.613 CC examples/util/zipf/zipf.o 00:04:35.613 CXX test/cpp_headers/blob.o 00:04:35.613 CXX test/cpp_headers/crc16.o 00:04:35.613 CXX test/cpp_headers/crc32.o 00:04:35.613 CXX test/cpp_headers/crc64.o 00:04:35.613 CC test/app/jsoncat/jsoncat.o 00:04:35.613 CXX test/cpp_headers/dma.o 00:04:35.613 CXX test/cpp_headers/dif.o 00:04:35.613 CXX test/cpp_headers/env_dpdk.o 00:04:35.613 CXX test/cpp_headers/endian.o 00:04:35.613 CXX test/cpp_headers/env.o 00:04:35.613 CXX test/cpp_headers/fd_group.o 00:04:35.613 CXX test/cpp_headers/event.o 00:04:35.613 CXX test/cpp_headers/file.o 00:04:35.613 CXX 
test/cpp_headers/fd.o 00:04:35.888 CXX test/cpp_headers/hexlify.o 00:04:35.888 CXX test/cpp_headers/gpt_spec.o 00:04:35.888 CXX test/cpp_headers/histogram_data.o 00:04:35.888 CXX test/cpp_headers/ftl.o 00:04:35.888 CXX test/cpp_headers/init.o 00:04:35.888 CXX test/cpp_headers/idxd.o 00:04:35.888 CXX test/cpp_headers/idxd_spec.o 00:04:35.888 CXX test/cpp_headers/ioat.o 00:04:35.888 CXX test/cpp_headers/ioat_spec.o 00:04:35.888 CXX test/cpp_headers/iscsi_spec.o 00:04:35.888 CXX test/cpp_headers/json.o 00:04:35.888 CXX test/cpp_headers/jsonrpc.o 00:04:35.888 CXX test/cpp_headers/keyring_module.o 00:04:35.888 CXX test/cpp_headers/log.o 00:04:35.888 CXX test/cpp_headers/keyring.o 00:04:35.888 CXX test/cpp_headers/likely.o 00:04:35.888 CXX test/cpp_headers/lvol.o 00:04:35.888 CXX test/cpp_headers/memory.o 00:04:35.888 CXX test/cpp_headers/mmio.o 00:04:35.888 CXX test/cpp_headers/nbd.o 00:04:35.888 CXX test/cpp_headers/net.o 00:04:35.888 CXX test/cpp_headers/nvme.o 00:04:35.888 CC examples/ioat/perf/perf.o 00:04:35.888 CXX test/cpp_headers/nvme_ocssd.o 00:04:35.888 CXX test/cpp_headers/notify.o 00:04:35.888 CXX test/cpp_headers/nvme_intel.o 00:04:35.888 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:35.888 CXX test/cpp_headers/nvme_zns.o 00:04:35.888 CXX test/cpp_headers/nvme_spec.o 00:04:35.888 CXX test/cpp_headers/nvmf_cmd.o 00:04:35.888 CC test/thread/poller_perf/poller_perf.o 00:04:35.888 CC test/app/stub/stub.o 00:04:35.888 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:35.888 CXX test/cpp_headers/nvmf.o 00:04:35.888 CXX test/cpp_headers/nvmf_transport.o 00:04:35.888 CXX test/cpp_headers/nvmf_spec.o 00:04:35.888 CXX test/cpp_headers/opal.o 00:04:35.888 CC test/app/histogram_perf/histogram_perf.o 00:04:35.888 CXX test/cpp_headers/opal_spec.o 00:04:35.888 CC examples/ioat/verify/verify.o 00:04:35.888 CC test/app/bdev_svc/bdev_svc.o 00:04:35.888 CXX test/cpp_headers/pci_ids.o 00:04:35.888 CC test/env/pci/pci_ut.o 00:04:35.888 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:35.888 CC test/env/vtophys/vtophys.o 00:04:35.888 CC test/env/memory/memory_ut.o 00:04:35.888 CXX test/cpp_headers/pipe.o 00:04:35.888 CC app/fio/nvme/fio_plugin.o 00:04:35.888 CC test/dma/test_dma/test_dma.o 00:04:35.888 LINK spdk_lspci 00:04:35.888 CC app/fio/bdev/fio_plugin.o 00:04:35.888 LINK spdk_trace_record 00:04:35.888 LINK rpc_client_test 00:04:36.159 LINK interrupt_tgt 00:04:36.159 CC test/env/mem_callbacks/mem_callbacks.o 00:04:36.159 LINK iscsi_tgt 00:04:36.159 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:36.159 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:36.159 LINK nvmf_tgt 00:04:36.159 LINK spdk_nvme_discover 00:04:36.159 LINK zipf 00:04:36.417 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:36.417 LINK stub 00:04:36.417 LINK jsoncat 00:04:36.417 CXX test/cpp_headers/queue.o 00:04:36.417 CXX test/cpp_headers/reduce.o 00:04:36.417 CXX test/cpp_headers/rpc.o 00:04:36.417 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:36.417 CXX test/cpp_headers/scheduler.o 00:04:36.417 CXX test/cpp_headers/scsi.o 00:04:36.417 CXX test/cpp_headers/scsi_spec.o 00:04:36.417 CXX test/cpp_headers/sock.o 00:04:36.417 CXX test/cpp_headers/string.o 00:04:36.417 CXX test/cpp_headers/stdinc.o 00:04:36.417 CXX test/cpp_headers/thread.o 00:04:36.417 CXX test/cpp_headers/trace.o 00:04:36.417 CXX test/cpp_headers/trace_parser.o 00:04:36.417 LINK histogram_perf 00:04:36.417 CXX test/cpp_headers/tree.o 00:04:36.417 LINK poller_perf 00:04:36.417 CXX test/cpp_headers/ublk.o 00:04:36.417 CXX test/cpp_headers/util.o 00:04:36.417 CXX 
test/cpp_headers/uuid.o 00:04:36.417 CXX test/cpp_headers/version.o 00:04:36.417 CXX test/cpp_headers/vfio_user_pci.o 00:04:36.417 CXX test/cpp_headers/vfio_user_spec.o 00:04:36.417 CXX test/cpp_headers/vhost.o 00:04:36.417 CXX test/cpp_headers/vmd.o 00:04:36.417 CXX test/cpp_headers/xor.o 00:04:36.417 CXX test/cpp_headers/zipf.o 00:04:36.417 LINK vtophys 00:04:36.417 LINK spdk_tgt 00:04:36.417 LINK bdev_svc 00:04:36.417 LINK env_dpdk_post_init 00:04:36.417 LINK ioat_perf 00:04:36.418 LINK spdk_trace 00:04:36.418 LINK verify 00:04:36.677 LINK spdk_dd 00:04:36.677 LINK pci_ut 00:04:36.677 LINK test_dma 00:04:36.677 LINK nvme_fuzz 00:04:36.677 CC examples/vmd/led/led.o 00:04:36.677 LINK spdk_nvme_identify 00:04:36.677 LINK spdk_nvme 00:04:36.677 LINK spdk_nvme_perf 00:04:36.677 CC examples/vmd/lsvmd/lsvmd.o 00:04:36.677 CC examples/idxd/perf/perf.o 00:04:36.677 CC examples/sock/hello_world/hello_sock.o 00:04:36.936 LINK spdk_bdev 00:04:36.936 LINK mem_callbacks 00:04:36.936 CC app/vhost/vhost.o 00:04:36.936 CC examples/thread/thread/thread_ex.o 00:04:36.936 CC test/event/reactor_perf/reactor_perf.o 00:04:36.936 LINK vhost_fuzz 00:04:36.936 CC test/event/reactor/reactor.o 00:04:36.936 CC test/event/app_repeat/app_repeat.o 00:04:36.936 CC test/event/event_perf/event_perf.o 00:04:36.936 LINK led 00:04:36.936 CC test/event/scheduler/scheduler.o 00:04:36.936 LINK lsvmd 00:04:36.936 LINK spdk_top 00:04:36.936 LINK reactor_perf 00:04:36.936 LINK reactor 00:04:36.936 LINK vhost 00:04:36.936 LINK hello_sock 00:04:36.936 LINK event_perf 00:04:37.195 LINK idxd_perf 00:04:37.195 LINK app_repeat 00:04:37.195 CC test/nvme/e2edp/nvme_dp.o 00:04:37.195 LINK thread 00:04:37.195 CC test/nvme/overhead/overhead.o 00:04:37.195 CC test/nvme/fused_ordering/fused_ordering.o 00:04:37.195 CC test/nvme/simple_copy/simple_copy.o 00:04:37.195 CC test/nvme/sgl/sgl.o 00:04:37.195 CC test/nvme/startup/startup.o 00:04:37.195 CC test/nvme/cuse/cuse.o 00:04:37.195 CC test/nvme/connect_stress/connect_stress.o 00:04:37.195 CC test/nvme/fdp/fdp.o 00:04:37.195 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:37.195 CC test/nvme/reset/reset.o 00:04:37.195 CC test/nvme/reserve/reserve.o 00:04:37.195 CC test/nvme/aer/aer.o 00:04:37.195 CC test/nvme/compliance/nvme_compliance.o 00:04:37.195 CC test/nvme/err_injection/err_injection.o 00:04:37.195 CC test/nvme/boot_partition/boot_partition.o 00:04:37.195 CC test/accel/dif/dif.o 00:04:37.195 CC test/blobfs/mkfs/mkfs.o 00:04:37.195 LINK scheduler 00:04:37.195 LINK memory_ut 00:04:37.195 CC test/lvol/esnap/esnap.o 00:04:37.195 LINK startup 00:04:37.195 LINK boot_partition 00:04:37.195 LINK connect_stress 00:04:37.195 LINK err_injection 00:04:37.195 LINK doorbell_aers 00:04:37.195 LINK fused_ordering 00:04:37.195 LINK simple_copy 00:04:37.454 LINK reserve 00:04:37.454 LINK nvme_dp 00:04:37.454 LINK sgl 00:04:37.455 LINK mkfs 00:04:37.455 LINK overhead 00:04:37.455 LINK reset 00:04:37.455 LINK aer 00:04:37.455 LINK fdp 00:04:37.455 CC examples/nvme/hello_world/hello_world.o 00:04:37.455 LINK nvme_compliance 00:04:37.455 CC examples/nvme/hotplug/hotplug.o 00:04:37.455 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:37.455 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:37.455 CC examples/nvme/reconnect/reconnect.o 00:04:37.455 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:37.455 CC examples/nvme/arbitration/arbitration.o 00:04:37.455 CC examples/nvme/abort/abort.o 00:04:37.455 LINK dif 00:04:37.455 CC examples/accel/perf/accel_perf.o 00:04:37.455 CC 
examples/blob/hello_world/hello_blob.o 00:04:37.455 CC examples/blob/cli/blobcli.o 00:04:37.714 LINK pmr_persistence 00:04:37.714 LINK cmb_copy 00:04:37.714 LINK hello_world 00:04:37.714 LINK hotplug 00:04:37.714 LINK iscsi_fuzz 00:04:37.714 LINK arbitration 00:04:37.714 LINK reconnect 00:04:37.714 LINK abort 00:04:37.714 LINK hello_blob 00:04:37.714 LINK nvme_manage 00:04:37.974 LINK accel_perf 00:04:37.974 LINK blobcli 00:04:37.974 CC test/bdev/bdevio/bdevio.o 00:04:38.234 LINK cuse 00:04:38.234 LINK bdevio 00:04:38.493 CC examples/bdev/bdevperf/bdevperf.o 00:04:38.493 CC examples/bdev/hello_world/hello_bdev.o 00:04:38.493 LINK hello_bdev 00:04:39.060 LINK bdevperf 00:04:39.319 CC examples/nvmf/nvmf/nvmf.o 00:04:39.579 LINK nvmf 00:04:40.518 LINK esnap 00:04:41.088 00:04:41.088 real 0m43.816s 00:04:41.088 user 6m29.696s 00:04:41.088 sys 3m25.821s 00:04:41.088 10:53:00 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:41.088 10:53:00 make -- common/autotest_common.sh@10 -- $ set +x 00:04:41.088 ************************************ 00:04:41.088 END TEST make 00:04:41.088 ************************************ 00:04:41.088 10:53:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:41.088 10:53:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:41.088 10:53:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:41.088 10:53:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.088 10:53:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:41.088 10:53:00 -- pm/common@44 -- $ pid=1169098 00:04:41.088 10:53:00 -- pm/common@50 -- $ kill -TERM 1169098 00:04:41.088 10:53:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.088 10:53:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:41.088 10:53:00 -- pm/common@44 -- $ pid=1169099 00:04:41.088 10:53:00 -- pm/common@50 -- $ kill -TERM 1169099 00:04:41.088 10:53:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.088 10:53:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:41.088 10:53:00 -- pm/common@44 -- $ pid=1169101 00:04:41.088 10:53:00 -- pm/common@50 -- $ kill -TERM 1169101 00:04:41.088 10:53:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.088 10:53:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:41.088 10:53:00 -- pm/common@44 -- $ pid=1169128 00:04:41.088 10:53:00 -- pm/common@50 -- $ sudo -E kill -TERM 1169128 00:04:41.088 10:53:00 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:41.088 10:53:00 -- nvmf/common.sh@7 -- # uname -s 00:04:41.088 10:53:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.088 10:53:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.088 10:53:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.088 10:53:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.088 10:53:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.088 10:53:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.088 10:53:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.088 10:53:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.088 10:53:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:04:41.088 10:53:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.088 10:53:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:41.088 10:53:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:41.088 10:53:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.088 10:53:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.088 10:53:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:41.088 10:53:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.088 10:53:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:41.088 10:53:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.088 10:53:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.088 10:53:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.088 10:53:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.088 10:53:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.088 10:53:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.088 10:53:00 -- paths/export.sh@5 -- # export PATH 00:04:41.089 10:53:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.089 10:53:00 -- nvmf/common.sh@47 -- # : 0 00:04:41.089 10:53:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:41.089 10:53:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:41.089 10:53:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.089 10:53:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.089 10:53:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.089 10:53:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:41.089 10:53:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:41.089 10:53:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:41.089 10:53:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:41.089 10:53:00 -- spdk/autotest.sh@32 -- # uname -s 00:04:41.089 10:53:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:41.089 10:53:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:41.089 10:53:00 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:41.089 10:53:00 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 
00:04:41.089 10:53:00 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:41.089 10:53:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:41.089 10:53:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:41.089 10:53:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:41.089 10:53:00 -- spdk/autotest.sh@48 -- # udevadm_pid=1227913 00:04:41.089 10:53:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:41.089 10:53:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:41.089 10:53:00 -- pm/common@17 -- # local monitor 00:04:41.089 10:53:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.089 10:53:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.089 10:53:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.089 10:53:00 -- pm/common@21 -- # date +%s 00:04:41.089 10:53:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:41.089 10:53:00 -- pm/common@21 -- # date +%s 00:04:41.089 10:53:00 -- pm/common@25 -- # sleep 1 00:04:41.089 10:53:00 -- pm/common@21 -- # date +%s 00:04:41.089 10:53:00 -- pm/common@21 -- # date +%s 00:04:41.089 10:53:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721983980 00:04:41.089 10:53:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721983980 00:04:41.089 10:53:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721983980 00:04:41.089 10:53:00 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721983980 00:04:41.089 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721983980_collect-vmstat.pm.log 00:04:41.089 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721983980_collect-cpu-load.pm.log 00:04:41.089 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721983980_collect-cpu-temp.pm.log 00:04:41.349 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721983980_collect-bmc-pm.bmc.pm.log 00:04:42.289 10:53:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:42.289 10:53:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:42.289 10:53:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.289 10:53:01 -- common/autotest_common.sh@10 -- # set +x 00:04:42.289 10:53:01 -- spdk/autotest.sh@59 -- # create_test_list 00:04:42.289 10:53:01 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:42.289 10:53:01 -- common/autotest_common.sh@10 -- # set +x 00:04:42.289 10:53:01 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:42.289 10:53:01 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:42.289 10:53:01 -- spdk/autotest.sh@61 -- # 
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:42.289 10:53:01 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:42.289 10:53:01 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:42.289 10:53:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:42.289 10:53:01 -- common/autotest_common.sh@1455 -- # uname 00:04:42.289 10:53:01 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:42.289 10:53:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:42.289 10:53:01 -- common/autotest_common.sh@1475 -- # uname 00:04:42.289 10:53:01 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:42.289 10:53:01 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:42.289 10:53:01 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:42.289 10:53:01 -- spdk/autotest.sh@72 -- # hash lcov 00:04:42.289 10:53:01 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:42.289 10:53:01 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:42.289 --rc lcov_branch_coverage=1 00:04:42.289 --rc lcov_function_coverage=1 00:04:42.289 --rc genhtml_branch_coverage=1 00:04:42.289 --rc genhtml_function_coverage=1 00:04:42.289 --rc genhtml_legend=1 00:04:42.289 --rc geninfo_all_blocks=1 00:04:42.289 ' 00:04:42.289 10:53:01 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:42.289 --rc lcov_branch_coverage=1 00:04:42.289 --rc lcov_function_coverage=1 00:04:42.289 --rc genhtml_branch_coverage=1 00:04:42.289 --rc genhtml_function_coverage=1 00:04:42.289 --rc genhtml_legend=1 00:04:42.289 --rc geninfo_all_blocks=1 00:04:42.289 ' 00:04:42.289 10:53:01 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:42.289 --rc lcov_branch_coverage=1 00:04:42.289 --rc lcov_function_coverage=1 00:04:42.289 --rc genhtml_branch_coverage=1 00:04:42.289 --rc genhtml_function_coverage=1 00:04:42.289 --rc genhtml_legend=1 00:04:42.289 --rc geninfo_all_blocks=1 00:04:42.289 --no-external' 00:04:42.289 10:53:01 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:42.289 --rc lcov_branch_coverage=1 00:04:42.289 --rc lcov_function_coverage=1 00:04:42.289 --rc genhtml_branch_coverage=1 00:04:42.289 --rc genhtml_function_coverage=1 00:04:42.289 --rc genhtml_legend=1 00:04:42.289 --rc geninfo_all_blocks=1 00:04:42.289 --no-external' 00:04:42.289 10:53:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:42.289 lcov: LCOV version 1.14 00:04:42.289 10:53:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:54.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:54.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:04.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:04.578 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:05:04.578 
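The lcov invocation above uses -c -i to capture a zero-count baseline from the .gcno files before any test has run; the "no functions found" warnings that follow are geninfo noting object files with nothing instrumented in them. The surrounding workflow is the standard lcov baseline/post-run/merge sequence, sketched here with illustrative output names:

  # Sketch of the usual lcov workflow around a test run (file names illustrative).
  lcov -c -i -d "$src" -o cov_base.info -t Baseline     # empty baseline from .gcno files
  # ... run the test suite, producing .gcda counter files ...
  lcov -c    -d "$src" -o cov_test.info -t Tests        # capture the real counters
  lcov -a cov_base.info -a cov_test.info -o cov_total.info   # merge so untouched files report 0%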
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:04.578 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:05:04.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:04.578 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:05:04.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:04.578 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:05:04.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:04.578 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:05:04.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:04.578 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:05:04.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:04.578 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:05:04.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:04.578 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:05:04.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:04.578 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:05:04.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:04.578 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:05:04.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:04.578 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:05:04.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions 
found 00:05:04.579 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:05:04.580 
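Every warning in this block has the same shape, one per compiled header check under test/cpp_headers, and none of them indicates a failure. When reviewing a saved log by hand it can help to collapse them to the affected file names; assuming the log was written to autotest.log (an illustrative name), something like:

  # Collapse the repetitive geninfo warnings to the unique .gcno paths they mention.
  grep -o '[^ ]*\.gcno' autotest.log | sort -u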
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:05:04.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:04.580 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:05:06.492 10:53:25 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:06.492 10:53:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:06.492 10:53:25 -- common/autotest_common.sh@10 -- # set +x 00:05:06.492 10:53:25 -- spdk/autotest.sh@91 -- # rm -f 00:05:06.492 10:53:25 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:09.038 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:05:09.038 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:09.038 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:09.038 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:09.038 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:09.038 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:09.038 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:09.038 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:09.038 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:09.038 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:09.038 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:09.038 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:09.038 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:09.038 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:09.038 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:09.299 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:09.299 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:09.299 10:53:28 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:09.299 10:53:28 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:09.299 10:53:28 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:09.299 10:53:28 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:09.299 10:53:28 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:09.299 10:53:28 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:09.299 10:53:28 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:09.299 10:53:28 -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:09.299 10:53:28 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:09.299 10:53:28 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:09.299 10:53:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:09.299 10:53:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:09.299 10:53:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:09.299 10:53:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:09.299 10:53:28 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:09.299 No valid GPT data, bailing 00:05:09.299 10:53:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:09.299 10:53:28 -- scripts/common.sh@391 -- # pt= 00:05:09.299 10:53:28 -- scripts/common.sh@392 -- # return 1 00:05:09.299 10:53:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:09.299 1+0 records in 00:05:09.299 1+0 records out 00:05:09.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00222027 s, 472 MB/s 00:05:09.299 10:53:28 -- spdk/autotest.sh@118 -- # sync 00:05:09.299 10:53:28 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:09.299 10:53:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:09.299 10:53:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:14.584 10:53:33 -- spdk/autotest.sh@124 -- # uname -s 00:05:14.584 10:53:33 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:14.584 10:53:33 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:14.584 10:53:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.584 10:53:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.584 10:53:33 -- common/autotest_common.sh@10 -- # set +x 00:05:14.584 ************************************ 00:05:14.584 START TEST setup.sh 00:05:14.584 ************************************ 00:05:14.584 10:53:33 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:14.584 * Looking for test storage... 00:05:14.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:14.584 10:53:33 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:14.584 10:53:33 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:14.584 10:53:33 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:14.584 10:53:33 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.584 10:53:33 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.584 10:53:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:14.584 ************************************ 00:05:14.584 START TEST acl 00:05:14.584 ************************************ 00:05:14.584 10:53:33 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:14.584 * Looking for test storage... 
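The block_in_use check above probes /dev/nvme0n1 for a GPT (spdk-gpt.py bails with "No valid GPT data", and blkid finds no partition-table type), so autotest treats the namespace as free and zeroes its first megabyte before syncing. A rough equivalent of that pre-clean using blkid directly; the device path is only an example and the operation is destructive:

  # Sketch: wipe the head of a scratch NVMe namespace if no partition table is present.
  # DESTRUCTIVE -- the device path below is an example, not a recommendation.
  dev=/dev/nvme0n1
  if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then   # no partition table detected
      dd if=/dev/zero of="$dev" bs=1M count=1            # clear the first 1 MiB
      sync                                               # flush before the next test stage
  fi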
00:05:14.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:14.584 10:53:34 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:14.584 10:53:34 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:14.584 10:53:34 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:14.584 10:53:34 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:14.584 10:53:34 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:14.584 10:53:34 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:14.584 10:53:34 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:14.584 10:53:34 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:14.584 10:53:34 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:14.584 10:53:34 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:14.584 10:53:34 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:14.584 10:53:34 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:14.584 10:53:34 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:14.584 10:53:34 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:14.584 10:53:34 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:14.584 10:53:34 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:17.879 10:53:37 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:17.879 10:53:37 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:17.879 10:53:37 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:17.879 10:53:37 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:17.879 10:53:37 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.879 10:53:37 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:20.479 Hugepages 00:05:20.479 node hugesize free / total 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 00:05:20.479 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:05:20.479 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:20.480 10:53:39 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:20.480 10:53:39 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.480 10:53:39 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.480 10:53:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:20.480 ************************************ 00:05:20.480 START TEST denied 00:05:20.480 ************************************ 00:05:20.480 10:53:39 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:05:20.480 10:53:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:05:20.480 10:53:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:20.480 10:53:39 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:05:20.480 10:53:39 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.480 10:53:39 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:23.069 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:05:23.069 10:53:42 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:05:23.069 10:53:42 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:23.069 10:53:42 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:23.069 10:53:42 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:05:23.069 10:53:42 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:05:23.069 10:53:42 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:23.069 10:53:42 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:23.069 10:53:42 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:23.069 10:53:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:23.069 10:53:42 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:27.281 00:05:27.281 real 0m6.520s 00:05:27.281 user 0m2.153s 00:05:27.281 sys 0m3.723s 00:05:27.281 10:53:46 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.281 10:53:46 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:27.281 ************************************ 00:05:27.281 END TEST denied 00:05:27.281 ************************************ 00:05:27.281 10:53:46 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:27.281 10:53:46 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.281 10:53:46 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.281 10:53:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:27.281 ************************************ 00:05:27.281 START TEST allowed 00:05:27.281 ************************************ 00:05:27.281 10:53:46 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:05:27.281 10:53:46 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:05:27.281 10:53:46 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:27.281 10:53:46 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:05:27.281 10:53:46 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.281 10:53:46 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:30.579 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:30.579 10:53:49 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:30.579 10:53:49 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:30.579 10:53:49 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:30.579 10:53:49 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:30.579 10:53:49 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:33.877 00:05:33.877 real 0m6.569s 00:05:33.877 user 0m1.964s 00:05:33.877 sys 0m3.737s 00:05:33.877 10:53:52 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.877 10:53:52 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:33.877 ************************************ 00:05:33.877 END TEST allowed 00:05:33.877 ************************************ 00:05:33.877 00:05:33.877 real 0m19.041s 00:05:33.877 user 0m6.369s 00:05:33.877 sys 0m11.352s 00:05:33.877 10:53:53 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.877 10:53:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:33.877 ************************************ 00:05:33.877 END TEST acl 00:05:33.877 ************************************ 00:05:33.877 10:53:53 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:33.877 10:53:53 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.877 10:53:53 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.877 10:53:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:33.877 ************************************ 00:05:33.877 START TEST hugepages 00:05:33.877 ************************************ 00:05:33.877 10:53:53 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:05:33.877 * Looking for test storage... 00:05:33.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:33.877 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:33.877 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:33.877 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:33.877 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:33.877 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:33.877 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:33.877 10:53:53 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:33.877 10:53:53 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:33.878 10:53:53 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:33.878 10:53:53 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:33.878 10:53:53 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.878 10:53:53 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.878 10:53:53 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.878 10:53:53 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.878 10:53:53 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.878 10:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:33.878 10:53:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:33.878 10:53:53 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 168231444 kB' 'MemAvailable: 171467676 kB' 'Buffers: 3896 kB' 'Cached: 14751924 kB' 'SwapCached: 0 kB' 'Active: 11618660 kB' 'Inactive: 3694312 kB' 'Active(anon): 11200704 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560356 kB' 'Mapped: 220588 kB' 'Shmem: 10643552 kB' 'KReclaimable: 536360 kB' 'Slab: 1193840 kB' 'SReclaimable: 536360 kB' 'SUnreclaim: 657480 kB' 'KernelStack: 20864 kB' 'PageTables: 9584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982020 kB' 'Committed_AS: 12750172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317320 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:33.878 10:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:05:33.878 10:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[... repeated setup/common.sh@31-32 trace: each remaining /proc/meminfo key from MemFree through HugePages_Surp is read and skipped because it does not match Hugepagesize ...]
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:33.879 10:53:53 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:33.879 10:53:53 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:33.879 10:53:53 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:33.880 10:53:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:33.880 ************************************
00:05:33.880 START TEST default_setup
00:05:33.880 ************************************
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:05:33.880 10:53:53 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:36.419 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:36.419 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:36.419 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:36.419 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:36.419 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:36.419 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:36.419 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
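Note: the get_test_nr_hugepages 2097152 0 call above works out to 1024 default-sized pages (2097152 kB requested over 2048 kB per page) reserved for node 0, and the scripts/setup.sh run that follows is what produces the ioatdma/nvme -> vfio-pci lines. A rough sketch under the standard sysfs interfaces; this is illustrative only, not setup.sh's actual implementation, which discovers the devices itself and handles many more cases:

  # 2 GiB requested (in kB) over the 2048 kB default page size -> 1024 pages,
  # matching nr_hugepages=1024 in the trace.
  HUGEMEM_KB=2097152
  NRHUGE=$(( HUGEMEM_KB / 2048 ))
  echo "$NRHUGE" | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

  # Generic sysfs rebind of one PCI function to vfio-pci (assumes the vfio-pci
  # module is already loaded).
  bdf=0000:00:04.7                                        # one of the ioatdma channels above
  echo "$bdf"   | sudo tee "/sys/bus/pci/devices/$bdf/driver/unbind"
  echo vfio-pci | sudo tee "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf"   | sudo tee /sys/bus/pci/drivers_probe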
00:05:36.419 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:36.419 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:36.419 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:36.419 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:36.419 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:36.419 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:36.419 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:36.419 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:36.419 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:37.809 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:37.809 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170359420 kB' 'MemAvailable: 173595620 kB' 'Buffers: 3896 kB' 'Cached: 14752032 kB' 'SwapCached: 0 kB' 'Active: 11634724 kB' 'Inactive: 3694312 kB' 'Active(anon): 11216768 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576012 kB' 'Mapped: 220600 kB' 'Shmem: 10643660 kB' 'KReclaimable: 536296 kB' 'Slab: 1192692 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 656396 kB' 'KernelStack: 20608 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12770236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317224 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB'
[... repeated setup/common.sh@31-32 trace: keys MemTotal through HardwareCorrupted are read and skipped; none matches AnonHugePages ...]
00:05:37.810 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:37.810 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:37.810 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:37.811 10:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
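Note: at this point verify_nr_hugepages has anon=0 from the snapshot above, and it goes on to pull HugePages_Surp and HugePages_Rsvd with the same per-key scan. For a quick manual check, the handful of fields it cares about can be read in one pass; this one-liner is illustrative only and is not what the script does:

  awk -F': *' '/^(AnonHugePages|HugePages_Total|HugePages_Free|HugePages_Rsvd|HugePages_Surp|Hugepagesize)/ {
      gsub(/ kB$/, "", $2); print $1 "=" $2
  }' /proc/meminfo
  # Against the snapshot above this yields AnonHugePages=0, HugePages_Total=1024,
  # HugePages_Free=1024, HugePages_Rsvd=0, HugePages_Surp=0, Hugepagesize=2048.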
00:05:37.811 10:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:37.811 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:37.811 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:37.811 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:37.811 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:37.811 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.811 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.811 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.811 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.811 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.811 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:37.811 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:37.811 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170359840 kB' 'MemAvailable: 173596040 kB' 'Buffers: 3896 kB' 'Cached: 14752036 kB' 'SwapCached: 0 kB' 'Active: 11634416 kB' 'Inactive: 3694312 kB' 'Active(anon): 11216460 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575676 kB' 'Mapped: 220600 kB' 'Shmem: 10643664 kB' 'KReclaimable: 536296 kB' 'Slab: 1192692 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 656396 kB' 'KernelStack: 20640 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12770256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317224 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB'
[... repeated setup/common.sh@31-32 trace: keys MemTotal through HugePages_Rsvd are read and skipped; none matches HugePages_Surp ...]
00:05:37.812 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.812 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:37.812 10:53:56 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:37.812 10:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:05:37.812 10:53:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:37.812 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:37.812 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:37.812 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:37.812 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:37.812 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.812 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.812 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.812 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- #
printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170359976 kB' 'MemAvailable: 173596176 kB' 'Buffers: 3896 kB' 'Cached: 14752052 kB' 'SwapCached: 0 kB' 'Active: 11633992 kB' 'Inactive: 3694312 kB' 'Active(anon): 11216036 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575744 kB' 'Mapped: 220452 kB' 'Shmem: 10643680 kB' 'KReclaimable: 536296 kB' 'Slab: 1192644 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 656348 kB' 'KernelStack: 20624 kB' 'PageTables: 9200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12771400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317208 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 
10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.813 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.814 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:37.815 nr_hugepages=1024 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:37.815 resv_hugepages=0 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:37.815 surplus_hugepages=0 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:37.815 anon_hugepages=0 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170360280 kB' 'MemAvailable: 173596480 kB' 'Buffers: 3896 kB' 'Cached: 14752076 kB' 'SwapCached: 0 kB' 'Active: 
11633984 kB' 'Inactive: 3694312 kB' 'Active(anon): 11216028 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575676 kB' 'Mapped: 220460 kB' 'Shmem: 10643704 kB' 'KReclaimable: 536296 kB' 'Slab: 1192644 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 656348 kB' 'KernelStack: 20736 kB' 'PageTables: 9168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12772916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317320 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 
10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.815 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:37.816 
10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:37.816 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91676816 kB' 'MemUsed: 5938812 kB' 'SwapCached: 0 kB' 'Active: 1807608 kB' 'Inactive: 216924 kB' 'Active(anon): 1645784 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1860308 kB' 'Mapped: 86620 kB' 'AnonPages: 167496 kB' 'Shmem: 1481560 kB' 'KernelStack: 11944 kB' 'PageTables: 3368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347432 kB' 'Slab: 666896 kB' 'SReclaimable: 347432 kB' 'SUnreclaim: 319464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:37.817 10:53:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.817 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.818 10:53:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:37.818 node0=1024 expecting 1024 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:37.818 00:05:37.818 real 0m3.869s 00:05:37.818 user 0m1.206s 00:05:37.818 sys 0m1.778s 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.818 10:53:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:05:37.818 ************************************ 00:05:37.818 END TEST default_setup 00:05:37.818 ************************************ 00:05:37.818 10:53:57 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:37.818 10:53:57 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:37.818 10:53:57 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.818 10:53:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:37.818 ************************************ 00:05:37.818 START TEST per_node_1G_alloc 00:05:37.818 ************************************ 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
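The xtrace above (and continuing below) steps through get_test_nr_hugepages 1048576 0 1 and get_test_nr_hugepages_per_node 0 1. A minimal sketch of the arithmetic those helpers appear to perform, based only on this trace and the 'Hugepagesize: 2048 kB' field shown in the meminfo dumps: the requested size in kB is divided by the default hugepage size to get a page count (1048576 / 2048 = 512), and that count is then requested on every NUMA node listed after the size. The *_sketch name, the hard-coded 2048 kB default and the declare -p output line are illustrative assumptions, not the actual SPDK scripts/setup/hugepages.sh code.

# Sketch only -- not the SPDK setup/hugepages.sh implementation.
get_test_nr_hugepages_sketch() {
    local size=$1; shift                               # requested size in kB, e.g. 1048576 (1 GiB)
    local node_ids=("$@")                              # NUMA nodes to allocate on, e.g. 0 1
    local default_hugepages=2048                       # assumption: 2048 kB, matching "Hugepagesize: 2048 kB" above
    local nr_hugepages=$((size / default_hugepages))   # 1048576 / 2048 = 512 pages
    local node
    local -a nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages                 # per the trace: 512 pages requested on node0 and node1
    done
    declare -p nodes_test                              # illustrative: print the resulting per-node request
}
# Example: get_test_nr_hugepages_sketch 1048576 0 1
#   -> declare -a nodes_test=([0]="512" [1]="512")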
00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:37.818 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:05:37.819 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:05:37.819 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.819 10:53:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:40.365 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:40.365 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:40.365 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:40.365 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:05:40.365 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:40.365 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:40.365 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:40.365 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:40.365 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170348420 kB' 'MemAvailable: 173584620 kB' 'Buffers: 3896 kB' 'Cached: 14752168 kB' 'SwapCached: 0 kB' 'Active: 11635232 kB' 'Inactive: 3694312 kB' 'Active(anon): 11217276 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576708 kB' 'Mapped: 220524 kB' 'Shmem: 10643796 kB' 'KReclaimable: 536296 kB' 'Slab: 1192472 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 656176 kB' 'KernelStack: 20736 kB' 'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12771884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317320 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 
164626432 kB' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.366 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
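The scan just traced is get_meminfo AnonHugePages returning 0 (the 'AnonHugePages: 0 kB' field in the dump above); verify_nr_hugepages stores that as anon=0 and then repeats the same lookup for HugePages_Surp (the call whose trace begins here) and, further down, HugePages_Rsvd. Below is a minimal sketch consistent with that xtrace; the _sketch name, the explicit mapfile redirection and the return 1 fallback are illustrative assumptions, and the real setup/common.sh may differ in details.

# Sketch only -- a get_meminfo lookalike consistent with the xtrace above.
shopt -s extglob                                  # needed for the +([0-9]) prefix-strip pattern
get_meminfo_sketch() {
    local get=$1 node=${2:-}                      # field name, optional NUMA node
    local var val _
    local mem_f=/proc/meminfo
    # per-node counters live in /sys/devices/system/node/node<N>/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem=()
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")              # per-node lines carry a "Node <N> " prefix; drop it
    while IFS=': ' read -r var val _; do          # e.g. "AnonHugePages:  0 kB" -> var=AnonHugePages val=0
        if [[ $var == "$get" ]]; then
            echo "$val"                           # here: 0 for AnonHugePages, 0 for HugePages_Surp
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# Example usage, mirroring the traced hugepages.sh assignments:
#   anon=$(get_meminfo_sketch AnonHugePages)
#   surp=$(get_meminfo_sketch HugePages_Surp)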
00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170351248 kB' 'MemAvailable: 173587448 kB' 'Buffers: 3896 kB' 'Cached: 14752172 kB' 'SwapCached: 0 kB' 'Active: 11636844 kB' 'Inactive: 3694312 kB' 'Active(anon): 11218888 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577992 kB' 'Mapped: 220472 kB' 'Shmem: 10643800 kB' 'KReclaimable: 536296 kB' 'Slab: 1192428 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 656132 kB' 'KernelStack: 20928 kB' 'PageTables: 9984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12773396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317400 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.367 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.368 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.634 10:53:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.634 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:40.635 10:53:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.635 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170354896 kB' 'MemAvailable: 173591096 kB' 'Buffers: 3896 kB' 'Cached: 14752172 kB' 'SwapCached: 0 kB' 'Active: 11637448 kB' 'Inactive: 3694312 kB' 'Active(anon): 11219492 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578628 kB' 'Mapped: 220492 kB' 'Shmem: 10643800 kB' 'KReclaimable: 536296 kB' 'Slab: 1192572 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 656276 kB' 'KernelStack: 21296 kB' 'PageTables: 10964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12773420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317432 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.636 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.637 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:40.638 nr_hugepages=1024 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:40.638 resv_hugepages=0 00:05:40.638 10:53:59 
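At this point the scan has produced surp=0 and resv=0 and the script echoes nr_hugepages=1024 and resv_hugepages=0; the next traced lines (hugepages.sh@107 through @110) verify that HugePages_Total from /proc/meminfo equals nr_hugepages + surp + resv. As a worked check against the snapshot printed above: 1024 pages at Hugepagesize 2048 kB gives Hugetlb 1024 * 2048 kB = 2097152 kB (2 GiB), and 1024 == 1024 + 0 + 0, so both tests pass. A hedged sketch of that consistency check, using awk as a stand-in for the script's own read loop and illustrative variable names:

    # Consistency check mirroring hugepages.sh@107/@109 with values from the log above.
    nr_hugepages=1024 surp=0 resv=0                                # computed by the traced script
    total=$(awk '$1=="HugePages_Total:"{print $2}' /proc/meminfo)  # 1024 in the snapshot above
    size_kb=$(awk '$1=="Hugepagesize:"{print $2}' /proc/meminfo)   # 2048 kB in the snapshot above
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
    echo "Hugetlb pool: $((total * size_kb)) kB"                   # 1024 * 2048 kB = 2097152 kB = 2 GiB here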
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:40.638 surplus_hugepages=0 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:40.638 anon_hugepages=0 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170357264 kB' 'MemAvailable: 173593464 kB' 'Buffers: 3896 kB' 'Cached: 14752208 kB' 'SwapCached: 0 kB' 'Active: 11635460 kB' 'Inactive: 3694312 kB' 'Active(anon): 11217504 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576964 kB' 'Mapped: 220492 kB' 'Shmem: 10643836 kB' 'KReclaimable: 536296 kB' 'Slab: 1192796 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 656500 kB' 'KernelStack: 21088 kB' 'PageTables: 10752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12771948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317496 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.638 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.639 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:40.640 10:53:59 
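The trace then leaves get_meminfo (echo 1024 / return 0), re-checks the 1024-page total, and enters get_nodes (hugepages.sh@27 through @33), which globs /sys/devices/system/node/node<N> and records an expected 512 pages for each of the two NUMA nodes on this box. A minimal sketch of that per-node expectation setup, assuming the same sysfs layout; the array name mirrors the traced nodes_sys:

    # Sketch of the traced get_nodes step: enumerate NUMA nodes via sysfs and expect
    # the 1024 hugepages to be split evenly, 512 per node on this 2-node system.
    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512        # the trace sets nodes_sys[<id>]=512 per node
    done
    echo "no_nodes=${#nodes_sys[@]}"         # -> no_nodes=2 in this run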
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92714052 kB' 'MemUsed: 4901576 kB' 'SwapCached: 0 kB' 'Active: 1810248 kB' 'Inactive: 216924 kB' 'Active(anon): 1648424 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1860412 kB' 'Mapped: 86644 kB' 'AnonPages: 169688 kB' 'Shmem: 1481664 kB' 'KernelStack: 12744 kB' 'PageTables: 5324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347432 kB' 'Slab: 666980 kB' 'SReclaimable: 347432 kB' 'SUnreclaim: 319548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.640 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.641 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77641432 kB' 'MemUsed: 16124076 kB' 'SwapCached: 0 kB' 'Active: 9827040 kB' 'Inactive: 3477388 kB' 'Active(anon): 9570908 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12895736 kB' 'Mapped: 133848 kB' 'AnonPages: 408736 kB' 'Shmem: 9162216 kB' 'KernelStack: 8696 kB' 'PageTables: 5888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 188864 kB' 'Slab: 525720 kB' 'SReclaimable: 188864 kB' 'SUnreclaim: 336856 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:53:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.642 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.642 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.642 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.642 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.643 10:54:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:40.643 node0=512 expecting 512 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:40.643 node1=512 expecting 512 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:40.643 00:05:40.643 real 0m2.826s 00:05:40.643 user 0m1.131s 00:05:40.643 sys 0m1.764s 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.643 10:54:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:40.643 ************************************ 00:05:40.643 END TEST per_node_1G_alloc 00:05:40.643 ************************************ 00:05:40.643 10:54:00 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:40.643 10:54:00 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.643 10:54:00 
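The node-1 lookup traced above is the same key scan that setup/common.sh's get_meminfo performs for every field: pick /proc/meminfo or the per-node meminfo file, strip the "Node <N> " prefix, then read fields with IFS=': ' until the requested key (here HugePages_Surp) matches, and echo its value. A minimal bash sketch of that logic, assuming only what the trace shows (the name get_meminfo_sketch and the trailing "echo 0" fallback are illustrative, not the script's verbatim code):

    # Print the value of one meminfo field, system-wide or for a single NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2-}
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo, whose lines carry a
        # leading "Node <N> " prefix that has to be dropped before splitting.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
            [[ -n $node ]] && line=${line#"Node $node "}
            IFS=': ' read -r var val _ <<< "$line"
            # Skip every field until the requested key turns up, then print it.
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        echo 0
    }

Called as get_meminfo_sketch HugePages_Surp 1, this would print the 0 the traced scan returns just before the test prints "node1=512 expecting 512".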
setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.643 10:54:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:40.643 ************************************ 00:05:40.643 START TEST even_2G_alloc 00:05:40.643 ************************************ 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.643 10:54:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:43.189 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:43.189 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:43.189 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:05:43.189 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:43.189 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:43.453 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:43.453 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:43.453 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:43.453 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:43.453 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:43.453 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:43.453 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:43.453 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:43.453 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:43.453 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:43.453 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:43.453 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:43.453 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:43.453 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:43.453 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:43.453 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170352116 kB' 'MemAvailable: 173588316 kB' 'Buffers: 3896 kB' 'Cached: 14752328 kB' 'SwapCached: 0 kB' 'Active: 11640328 kB' 'Inactive: 3694312 kB' 'Active(anon): 11222372 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581612 kB' 'Mapped: 220400 kB' 'Shmem: 10643956 kB' 'KReclaimable: 536296 kB' 'Slab: 1192216 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 655920 kB' 'KernelStack: 20912 kB' 'PageTables: 10328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12758764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317388 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.454 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
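The scan that just finished is the first input to verify_nr_hugepages for the even_2G_alloc case: hugepages.sh@96 has already confirmed that transparent hugepages are not pinned to [never] ("always [madvise] never" in the trace), so the helper is asked for AnonHugePages from /proc/meminfo and the result is stored as anon=0. Roughly, reusing the illustrative helper sketched earlier (the real check lives in setup/hugepages.sh; the exact sysfs read shown here is an assumption based on the pattern the trace tests against):

    # Only count anonymous THP when the kernel may actually hand them out.
    anon=0
    thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_state != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB in this run
    fi

That leaves anon at 0 here, matching the traced "anon=0" above; the next traced pass repeats the same field scan for HugePages_Surp.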
00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:43.455 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:43.456 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170353424 kB' 'MemAvailable: 173589624 kB' 'Buffers: 3896 kB' 'Cached: 14752332 kB' 'SwapCached: 0 kB' 'Active: 11639176 kB' 'Inactive: 3694312 kB' 'Active(anon): 11221220 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580516 kB' 'Mapped: 220360 kB' 'Shmem: 10643960 kB' 'KReclaimable: 536296 kB' 'Slab: 1192252 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 655956 kB' 'KernelStack: 20688 kB' 'PageTables: 9368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12756164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317196 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB'
[setup/common.sh@31-@32: the read/compare loop then walks every /proc/meminfo field in order -- MemTotal through HugePages_Rsvd each fail the HugePages_Surp test and hit continue]
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
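Each get_meminfo call in this trace just scans /proc/meminfo field by field until the requested key matches, then prints its value. A minimal stand-alone sketch of that pattern follows (the function name get_meminfo_sketch is made up here; the real helper in setup/common.sh additionally supports per-NUMA-node meminfo files, which is why the trace strips a "Node N " prefix and probes /sys/devices/system/node/node/meminfo with an empty node):

#!/usr/bin/env bash
# Minimal sketch of a get_meminfo-style lookup against /proc/meminfo.
# Illustrative only; not the setup/common.sh implementation.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested key matches, then print its value.
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done </proc/meminfo
    return 1
}

# Reproduces the values the trace derives on this host: surp=0, resv=0.
echo "surp=$(get_meminfo_sketch HugePages_Surp) resv=$(get_meminfo_sketch HugePages_Rsvd)"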
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:43.458 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170353172 kB' 'MemAvailable: 173589372 kB' 'Buffers: 3896 kB' 'Cached: 14752348 kB' 'SwapCached: 0 kB' 'Active: 11639140 kB' 'Inactive: 3694312 kB' 'Active(anon): 11221184 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580492 kB' 'Mapped: 220348 kB' 'Shmem: 10643976 kB' 'KReclaimable: 536296 kB' 'Slab: 1192124 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 655828 kB' 'KernelStack: 20688 kB' 'PageTables: 9596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12756184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317196 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB'
[setup/common.sh@31-@32: the read/compare loop walks every field again -- MemTotal through HugePages_Free each fail the HugePages_Rsvd test and hit continue]
00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:43.460 nr_hugepages=1024
10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:43.460 resv_hugepages=0
10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:43.460 surplus_hugepages=0
10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:43.460 anon_hugepages=0
10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
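The accounting just traced (hugepages.sh@102-@109) only has to confirm that the kernel's hugepage pool matches what the even_2G_alloc test configured: 1024 pages of 2048 kB with nothing reserved or surplus. A hedged, self-contained sketch of that check (awk stands in for the get_meminfo helper; variable names mirror the trace but this is not the hugepages.sh source itself):

#!/usr/bin/env bash
# Sketch of the hugepage accounting check; assumed names, not the real script.
nr_hugepages=1024                                              # pool size the test configured
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo) # 0 in this run
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo) # 0 in this run

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# The pool must account exactly for the requested 1024 x 2048 kB (2 GiB);
# any reserved or surplus pages would make the totals disagree.
(( 1024 == nr_hugepages + surp + resv )) || { echo "unexpected hugepage accounting" >&2; exit 1; }
(( 1024 == nr_hugepages )) || { echo "hugepage pool size mismatch" >&2; exit 1; }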
00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170352920 kB' 'MemAvailable: 173589120 kB' 'Buffers: 3896 kB' 'Cached: 14752372 kB' 'SwapCached: 0 kB' 'Active: 11639196 kB' 'Inactive: 3694312 kB' 'Active(anon): 11221240 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580492 kB' 'Mapped: 220348 kB' 'Shmem: 10644000 kB' 'KReclaimable: 536296 kB' 'Slab: 1192124 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 655828 kB' 'KernelStack: 20688 kB' 'PageTables: 9596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12756208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317196 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.460 10:54:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.460 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 
10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.461 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 
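This stretch of trace is setup/common.sh's get_meminfo helper scanning every /proc/meminfo field against the requested key (HugePages_Total here), taking the `continue` branch on each non-matching field; the match, the `echo 1024`, and the `return 0` follow just below. A minimal sketch of that loop, reconstructed from the statements visible in the trace rather than copied from the real setup/common.sh (the function body and error handling are assumptions):

```bash
#!/usr/bin/env bash
# Minimal sketch of the field scan driving the "IFS=': ' / read / continue"
# entries in the trace: walk /proc/meminfo one "Key: value" line at a time
# until the requested field matches, then print its value.
get_meminfo() {
    local get=$1                       # field name, e.g. HugePages_Total
    local var val _ line
    local -a mem
    mapfile -t mem < /proc/meminfo     # one meminfo line per array element
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the repeated "continue" steps in the trace
        echo "$val"                        # e.g. 1024 once HugePages_Total matches
        return 0
    done
    return 1
}

get_meminfo HugePages_Total            # prints 1024 on the machine traced above
```

The echoed value is what hugepages.sh then feeds into its `(( 1024 == nr_hugepages + surp + resv ))` check a few entries later.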
10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92681872 kB' 'MemUsed: 4933756 kB' 'SwapCached: 0 kB' 'Active: 1811720 kB' 'Inactive: 216924 kB' 'Active(anon): 1649896 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1860408 kB' 'Mapped: 86448 kB' 'AnonPages: 171368 kB' 'Shmem: 1481660 kB' 'KernelStack: 11960 kB' 'PageTables: 
3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347432 kB' 'Slab: 666588 kB' 'SReclaimable: 347432 kB' 'SUnreclaim: 319156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.462 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 
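The same scan is now running against a single NUMA node: the trace shows `get_meminfo HugePages_Surp 0` switching its source from /proc/meminfo to /sys/devices/system/node/node0/meminfo and stripping the leading `Node 0 ` prefix with the extglob expansion `${mem[@]#Node +([0-9]) }` before matching fields; the `echo 0` / `return 0` for node0 follow just below. A sketch of that per-node path, reconstructed from the traced statements (the surrounding function structure is an assumption):

```bash
#!/usr/bin/env bash
# Per-node variant of the field scan seen in the trace: prefer the node's own
# meminfo from sysfs and drop the "Node <N> " prefix so the same
# "Key: value" parsing works for both sources.
shopt -s extglob                        # the +([0-9]) pattern below needs extended globbing

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 HugePages_Surp:  0" -> "HugePages_Surp:  0"
    local var val _ line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Surp 0            # prints 0 for node0 in the run above
```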
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:43.463 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.464 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:43.464 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:43.464 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.464 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.464 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77669788 kB' 'MemUsed: 16095720 kB' 'SwapCached: 0 kB' 'Active: 9827548 kB' 'Inactive: 3477388 kB' 'Active(anon): 9571416 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12895900 kB' 'Mapped: 133900 kB' 'AnonPages: 409132 kB' 'Shmem: 9162380 kB' 'KernelStack: 8728 kB' 
'PageTables: 5952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 188864 kB' 'Slab: 525536 kB' 'SReclaimable: 188864 kB' 'SUnreclaim: 336672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.726 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 
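What the surrounding trace is building toward: verify_nr_hugepages walks each node, adds the reserved and surplus counts it just read to the expected per-node total, and prints `nodeN=... expecting ...` before comparing the two (the `node0=512 expecting 512` and `node1=512 expecting 512` lines appear just below). A rough sketch of that accounting, with nodes_test, nodes_sys, and resv named after the trace variables; the actual hugepages.sh loop, including the sorted_t/sorted_s bookkeeping visible in the trace, differs in detail:

```bash
#!/usr/bin/env bash
# Rough sketch of the per-node verification: expected counts come from the
# test parameters, observed counts from the per-node meminfo reads above,
# and the two are compared node by node.
declare -a nodes_test=( [0]=512 [1]=512 )   # expected pages per node (even 2G alloc)
declare -a nodes_sys=(  [0]=512 [1]=512 )   # pages the kernel actually placed per node
resv=0                                      # HugePages_Rsvd from the global read

status=0
for node in "${!nodes_test[@]}"; do
    surp=0                                  # HugePages_Surp for this node (0 in the run above)
    (( nodes_test[node] += resv + surp ))
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} -eq ${nodes_test[node]} ]] || status=1
done
exit "$status"
```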
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:43.727 node0=512 expecting 512 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:05:43.727 node1=512 expecting 512 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:43.727 00:05:43.727 real 0m2.901s 00:05:43.727 user 0m1.192s 00:05:43.727 sys 0m1.778s 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.727 10:54:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:43.727 ************************************ 00:05:43.727 END TEST even_2G_alloc 00:05:43.727 ************************************ 00:05:43.727 10:54:03 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:43.727 10:54:03 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.727 10:54:03 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.727 10:54:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:43.727 ************************************ 00:05:43.727 START TEST odd_alloc 00:05:43.727 
************************************ 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:43.727 10:54:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.728 10:54:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:46.274 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:46.274 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:46.274 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:46.274 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:46.274 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:46.274 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:46.274 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:46.274 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:46.274 
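The odd_alloc test configured above asks for 2098176 kB of 2 MiB pages, i.e. 1025 pages, and the trace shows get_test_nr_hugepages_per_node splitting them 512/513 across the two nodes before HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes are exported and scripts/setup.sh reconfigures the system (the vfio-pci lines here and below are its device scan). A sketch of that split; the rounding and the way the remainder lands on one node are inferred from the traced `nodes_test[...]=512` / `=513` assignments, not copied from hugepages.sh:

```bash
#!/usr/bin/env bash
# Sketch of the odd page-count split: 2098176 kB of 2048 kB pages is 1025
# pages, which cannot divide evenly over 2 NUMA nodes, so one node gets
# the extra page (512 + 513).
size_kb=2098176
hugepagesize_kb=2048
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))   # 1025

no_nodes=2
declare -a nodes_test
remaining=$nr_hugepages
for (( node = no_nodes - 1; node >= 0; node-- )); do
    share=$(( remaining / (node + 1) ))      # even share over the nodes not yet assigned
    nodes_test[node]=$share
    remaining=$(( remaining - share ))
done
echo "nr_hugepages=$nr_hugepages -> node0=${nodes_test[0]} node1=${nodes_test[1]}"
# prints: nr_hugepages=1025 -> node0=513 node1=512
```

The 2098176 kB figure is the same request as the HUGEMEM=2049 export seen in the trace, expressed in kB instead of MiB (2049 * 1024).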
0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:46.274 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:46.274 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:46.274 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:46.274 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:46.274 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:46.274 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:46.274 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:46.274 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170378928 kB' 'MemAvailable: 173615128 kB' 'Buffers: 3896 kB' 'Cached: 14752468 kB' 'SwapCached: 0 kB' 'Active: 11638876 kB' 'Inactive: 3694312 kB' 'Active(anon): 11220920 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580148 kB' 'Mapped: 220624 kB' 'Shmem: 10644096 kB' 'KReclaimable: 536296 kB' 'Slab: 1191220 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 654924 kB' 'KernelStack: 20608 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12756624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317180 kB' 
'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.274 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.275 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 
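The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time with IFS=': ' and read -r var val _, hitting "continue" for every key that is not the one requested, then echoing the value once AnonHugePages matches (0 kB on this box, hence anon=0). A minimal bash reconstruction of that pattern, inferred from the xtrace output rather than copied from the SPDK tree (the function name, argument handling and node branch are assumptions), looks like this:

shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    # With a node id the per-node file is read instead; the log's
    # "node/node/meminfo" existence test is this same path with an empty $node.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan field by field; each non-matching key is one "continue" in the log.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_sketch AnonHugePages    # prints 0 on the machine in this log

The helper rescans the whole file for every key it is asked for, which is why the same continue chain repeats below for HugePages_Surp, HugePages_Rsvd and HugePages_Total.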
10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170380076 kB' 'MemAvailable: 173616276 kB' 'Buffers: 3896 kB' 'Cached: 14752472 kB' 'SwapCached: 0 kB' 'Active: 11639328 kB' 'Inactive: 3694312 kB' 'Active(anon): 11221372 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580516 kB' 'Mapped: 220512 kB' 'Shmem: 10644100 kB' 'KReclaimable: 536296 kB' 'Slab: 1191300 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 655004 kB' 'KernelStack: 20576 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12756648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317164 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.276 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.277 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170380580 kB' 'MemAvailable: 173616780 kB' 'Buffers: 3896 kB' 'Cached: 14752500 kB' 'SwapCached: 0 kB' 'Active: 11639300 kB' 'Inactive: 3694312 kB' 'Active(anon): 11221344 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580528 kB' 'Mapped: 220364 kB' 'Shmem: 10644128 kB' 'KReclaimable: 536296 kB' 'Slab: 1191308 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 655012 kB' 'KernelStack: 20656 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12758132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317164 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- 
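At this point anon=0 and surp=0 are known, and the helper starts over from MemTotal a third time for HugePages_Rsvd (and a fourth time further down for HugePages_Total), which is what produces the repeated continue chains filling this part of the log. Purely for comparison, and not what setup/common.sh does, the same four counters could be cached in one pass with a bash associative array:

declare -A meminfo
while IFS=': ' read -r key val _; do
    meminfo[$key]=$val
done < /proc/meminfo

# The four counters the odd_alloc test reads, per this log: 0, 0, 0, 1025.
printf 'anon=%s surp=%s resv=%s total=%s\n' \
    "${meminfo[AnonHugePages]}" "${meminfo[HugePages_Surp]}" \
    "${meminfo[HugePages_Rsvd]}" "${meminfo[HugePages_Total]}"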
setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 
10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.278 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.279 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.280 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.280 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.280 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.280 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.280 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.280 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.280 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.280 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:46.544 nr_hugepages=1025 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:46.544 resv_hugepages=0 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:46.544 surplus_hugepages=0 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:46.544 anon_hugepages=0 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- 
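With anon=0, surp=0 and resv=0 in hand, hugepages.sh@102-105 echoes the nr_hugepages=1025 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 lines seen above, and @107-109 asserts that the odd request is satisfied by persistent pages alone before re-reading HugePages_Total. A hedged sketch of that arithmetic (key() is an illustrative stand-in, and in the real script nr_hugepages is carried over from the allocation step rather than re-read here):

key() {
    local k=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$k" ]] && { echo "${val:-0}"; return; }
    done < /proc/meminfo
}

want=1025                        # the deliberately odd page count requested
anon=$(key AnonHugePages)        # 0 in this run
surp=$(key HugePages_Surp)       # 0
resv=$(key HugePages_Rsvd)       # 0
nr=$(key HugePages_Total)        # 1025 once the allocation has stuck

echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
# These correspond to the two arithmetic tests in the trace: the request must be
# met without surplus or reserved pages (1025 == 1025 + 0 + 0), and must equal
# the configured page count outright.
(( want == nr + surp + resv )) || exit 1
(( want == nr )) || exit 1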
setup/common.sh@20 -- # local mem_f mem 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170377212 kB' 'MemAvailable: 173613412 kB' 'Buffers: 3896 kB' 'Cached: 14752520 kB' 'SwapCached: 0 kB' 'Active: 11643392 kB' 'Inactive: 3694312 kB' 'Active(anon): 11225436 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 585116 kB' 'Mapped: 220364 kB' 'Shmem: 10644148 kB' 'KReclaimable: 536296 kB' 'Slab: 1191308 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 655012 kB' 'KernelStack: 20640 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12762244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317116 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.544 10:54:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.544 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.545 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92712632 kB' 'MemUsed: 4902996 kB' 'SwapCached: 0 kB' 'Active: 1817504 kB' 'Inactive: 216924 kB' 'Active(anon): 1655680 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1860428 kB' 'Mapped: 86460 kB' 'AnonPages: 177292 kB' 'Shmem: 1481680 kB' 'KernelStack: 11944 kB' 'PageTables: 3332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347432 kB' 'Slab: 665920 kB' 'SReclaimable: 347432 kB' 'SUnreclaim: 318488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.546 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.547 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
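The block of trace above is setup/common.sh's get_meminfo helper doing a field-by-field scan: it mapfiles either /proc/meminfo or a per-node /sys/devices/system/node/nodeN/meminfo, strips any leading "Node <id>" prefix, splits each entry on ': ', and skips every key until the requested one matches (HugePages_Total for the global pass, HugePages_Surp for the per-node passes), then echoes the value and returns. A minimal, simplified sketch of that pattern, reconstructed from the trace rather than copied from the SPDK tree:

#!/usr/bin/env bash
# Reconstruction of the get_meminfo pattern visible in the xtrace above.
# Not the verbatim SPDK helper; behaviour is inferred from the trace only.
shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=${2:-}      # e.g. get=HugePages_Total, node=0
    local var val _ mem mem_f=/proc/meminfo

    # Per-node lookups read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long run of 'continue' above
        echo "$val"                        # e.g. 1025 for HugePages_Total
        return 0
    done
    return 1
}

The global pass earlier in the trace echoed 1025, satisfying (( 1025 == nr_hugepages + surp + resv )); the per-node HugePages_Surp passes that follow both echo 0.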
00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77661052 kB' 'MemUsed: 16104456 kB' 'SwapCached: 0 kB' 'Active: 9827372 kB' 'Inactive: 3477388 kB' 'Active(anon): 9571240 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12895988 kB' 'Mapped: 133932 kB' 'AnonPages: 408852 kB' 'Shmem: 9162468 kB' 'KernelStack: 8696 kB' 'PageTables: 5804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 188864 kB' 'Slab: 525388 kB' 'SReclaimable: 188864 kB' 'SUnreclaim: 336524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
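What odd_alloc is checking with these per-node passes: 1025 huge pages are present in total, the kernel placed them 512/513 across the two NUMA nodes, and the test only requires the multiset of per-node counts to match, not the exact placement, which is why "node0=512 expecting 513" and "node1=513 expecting 512" further down still pass. A hedged sketch of that bookkeeping, using hypothetical array names (nodes_got, nodes_expected, sorted_got, sorted_exp) in place of the script's nodes_test/nodes_sys and sorted_t/sorted_s, and reusing the get_meminfo sketch above:

# Hypothetical reconstruction of the odd_alloc accounting; array names here
# are illustrative stand-ins, not the ones hugepages.sh actually uses.
nodes_got=(512 513)        # per-node counts the test derived for node0/node1
nodes_expected=(513 512)   # per-node counts reported via sysfs in this run
sorted_got=() sorted_exp=()
resv=0                     # reserved pages to fold into each node's total

for node in "${!nodes_got[@]}"; do
    (( nodes_got[node] += resv ))
    # Surplus pages on a node count toward its total (0 on both nodes here).
    (( nodes_got[node] += $(get_meminfo HugePages_Surp "$node") ))
    sorted_got[nodes_got[node]]=1
    sorted_exp[nodes_expected[node]]=1
    echo "node$node=${nodes_got[node]} expecting ${nodes_expected[node]}"
done

# 1025 is odd, so the extra page may legally land on either node; comparing
# the sorted index sets ("512 513" == "512 513") accepts either placement.
[[ ${!sorted_got[*]} == "${!sorted_exp[*]}" ]] && echo "odd_alloc passes"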
00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.548 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:05:46.549 node0=512 expecting 513 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:46.549 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:46.550 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:46.550 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:05:46.550 node1=513 expecting 512 00:05:46.550 10:54:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:05:46.550 00:05:46.550 real 0m2.810s 00:05:46.550 user 0m1.097s 00:05:46.550 sys 0m1.758s 00:05:46.550 10:54:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.550 10:54:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:46.550 ************************************ 00:05:46.550 END TEST odd_alloc 00:05:46.550 ************************************ 00:05:46.550 10:54:05 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:46.550 10:54:05 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.550 10:54:05 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.550 10:54:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:46.550 ************************************ 00:05:46.550 START TEST custom_alloc 00:05:46.550 ************************************ 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:46.550 10:54:05 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.550 10:54:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:49.094 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:49.094 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:49.094 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:49.094 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:49.094 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:49.094 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:49.094 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:49.094 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:49.094 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:49.094 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:49.094 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:49.094 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:49.094 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 
00:05:49.094 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:49.094 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:49.094 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:49.094 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.094 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169313212 kB' 'MemAvailable: 172549412 kB' 'Buffers: 3896 kB' 'Cached: 14752628 kB' 'SwapCached: 0 kB' 'Active: 11638888 kB' 'Inactive: 3694312 kB' 'Active(anon): 11220932 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579980 kB' 'Mapped: 220060 kB' 'Shmem: 10644256 kB' 'KReclaimable: 536296 kB' 'Slab: 1192272 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 655976 kB' 'KernelStack: 20640 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12756328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317288 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 
3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.095 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
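The AnonHugePages lookup that just returned 0, and the HugePages_Surp scan starting here, both follow the same pattern visible in the trace: load the node-prefix-stripped /proc/meminfo into an array, then read each "key: value" pair with IFS=': ' and skip (continue) every key that is not the one requested. A minimal standalone equivalent, with a hypothetical helper name rather than the real get_meminfo, looks like this:

    # Scan /proc/meminfo for one key and print its numeric value.
    lookup_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # unmatched keys fall through
            echo "$val"                        # unit column (kB) lands in "_"
            return 0
        done < /proc/meminfo
        return 1
    }

    lookup_meminfo AnonHugePages     # e.g. 0
    lookup_meminfo HugePages_Surp    # e.g. 0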
00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169312932 kB' 'MemAvailable: 172549132 kB' 'Buffers: 3896 kB' 'Cached: 14752632 kB' 'SwapCached: 0 kB' 'Active: 11640936 kB' 'Inactive: 3694312 kB' 'Active(anon): 11222980 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581952 kB' 'Mapped: 220356 kB' 'Shmem: 10644260 kB' 'KReclaimable: 536296 kB' 'Slab: 1192264 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 655968 kB' 'KernelStack: 20592 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12758208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317244 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.096 10:54:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.096 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 
10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
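A side note on the odd-looking comparisons such as [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]: the right-hand side is a quoted variable, and bash's xtrace escapes every character of a quoted == operand inside [[ ]] so that the printed command still reads as a literal string match rather than a glob. A short reproduction (illustrative only):

    set -x
    get=HugePages_Surp
    [[ Mlocked == "$get" ]]   # xtrace prints: [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x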
00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.097 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
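Once this scan finishes, verify_nr_hugepages has anon and surp set, with resv collected next, alongside the HugePages_* counters already visible in the dump (HugePages_Total: 1536, HugePages_Free: 1536, HugePages_Surp: 0 for the requested 512+1024 split). A hypothetical standalone consistency check over those same counters, not the SPDK verify logic itself, might look like:

    # Compare the live hugepage counters against the requested pool size.
    expected=1536
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    free=$(awk  '/^HugePages_Free:/  {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    anon=$(awk  '/^AnonHugePages:/   {print $2}' /proc/meminfo)   # THP usage, in kB

    if (( total == expected && free == total && surp == 0 && anon == 0 )); then
        echo "hugepage pool matches request"
    else
        echo "unexpected hugepage accounting" >&2
    fi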
00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169313412 kB' 'MemAvailable: 172549612 kB' 'Buffers: 3896 kB' 'Cached: 14752644 kB' 'SwapCached: 0 kB' 'Active: 11640372 kB' 'Inactive: 3694312 kB' 'Active(anon): 11222416 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581376 kB' 'Mapped: 220364 kB' 'Shmem: 
10644272 kB' 'KReclaimable: 536296 kB' 'Slab: 1192328 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 656032 kB' 'KernelStack: 20624 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12758228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317244 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.098 
10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.098 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.363 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
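The long run of IFS=': ', read -r var val _, [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and continue entries above is bash xtrace output from the get_meminfo helper in setup/common.sh: it walks the captured meminfo lines one by one and skips every key until the requested one (HugePages_Rsvd at this point) matches, then prints that key's value. The backslash-escaped pattern is simply how xtrace renders the quoted right-hand side of the [[ ... == ... ]] test. A minimal sketch of that scan loop, under simplified names (get_meminfo_sketch, get, mem) that are assumptions here, not the verbatim SPDK helper:

# Hedged sketch (not the verbatim setup/common.sh code): scan meminfo
# lines for one key and print its value.
get_meminfo_sketch() {
  local get=$1 var val _
  local -a mem
  mapfile -t mem < /proc/meminfo            # "Key:   value kB" lines
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"  # split into key / value / unit
    [[ $var == "$get" ]] || continue        # keep skipping until the key matches
    echo "$val"
    return 0
  done
  return 1
}

On this run the loop eventually reaches the HugePages_Rsvd line, echoes 0 and returns 0, which is where the resv=0 seen a little further down in the trace comes from.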
00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:49.364 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:49.364 nr_hugepages=1536 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:49.365 resv_hugepages=0 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:49.365 surplus_hugepages=0 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:49.365 anon_hugepages=0 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169312892 kB' 'MemAvailable: 172549092 kB' 'Buffers: 3896 kB' 'Cached: 14752672 kB' 'SwapCached: 0 kB' 'Active: 11640400 kB' 'Inactive: 3694312 kB' 'Active(anon): 11222444 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581372 kB' 'Mapped: 220364 kB' 'Shmem: 10644300 kB' 'KReclaimable: 536296 kB' 'Slab: 1192328 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 656032 kB' 'KernelStack: 20624 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12758248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317260 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.365 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:49.366 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92697752 kB' 'MemUsed: 4917876 kB' 'SwapCached: 0 kB' 'Active: 1806828 kB' 'Inactive: 216924 kB' 'Active(anon): 1645004 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1860588 kB' 'Mapped: 86316 kB' 'AnonPages: 166320 kB' 'Shmem: 1481840 kB' 'KernelStack: 11928 kB' 'PageTables: 3184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347432 kB' 'Slab: 666684 kB' 'SReclaimable: 347432 kB' 'SUnreclaim: 319252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.367 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
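The node-0 block that ends here repeats the same scan, but against a single NUMA node: when get_meminfo is called with a node argument it switches its source from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix from every line (the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace) before looking up HugePages_Surp. A rough sketch of that source selection, reusing the simplified helper naming from above and therefore an assumption rather than the exact common.sh code:

# Hedged sketch: choose the per-node meminfo file when a node number is
# given, and drop the "Node <N> " prefix those files carry.
shopt -s extglob
node_meminfo_sketch() {
  local get=$1 node=${2-} mem_f=/proc/meminfo var val _
  local -a mem
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")          # "Node 0 MemFree: ..." -> "MemFree: ..."
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}

With an empty node argument the -e test fails (there is no /sys/devices/system/node/node/meminfo), so the helper falls back to /proc/meminfo, which matches the system-wide lookups earlier in the trace. Here node 0 reports HugePages_Surp: 0, so the following (( nodes_test[node] += 0 )) leaves node 0's expected page count unchanged.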
00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 76616912 kB' 'MemUsed: 17148596 kB' 'SwapCached: 0 kB' 'Active: 9830076 kB' 'Inactive: 3477388 kB' 'Active(anon): 9573944 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12896000 kB' 'Mapped: 133180 kB' 'AnonPages: 411904 kB' 'Shmem: 9162480 kB' 'KernelStack: 8920 kB' 'PageTables: 6164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 188864 kB' 'Slab: 525644 kB' 'SReclaimable: 188864 kB' 'SUnreclaim: 336780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.368 10:54:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.368 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
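For context on what these per-node lookups feed into, as far as the trace shows: the test expects nr_hugepages=1536 two-megabyte pages system wide (Hugepagesize: 2048 kB in the meminfo dump) with resv_hugepages=0 and surplus_hugepages=0, get_nodes recorded per-node targets of 512 pages on node 0 and 1024 on node 1, and the HugePages_Surp loop (node 1 is the one being scanned here) adds any surplus to each node's expected count before the per-node totals are checked. A small sketch of that bookkeeping follows; the starting nodes_test values are an assumption inferred from the 512/1024 split, not something the trace states:

# Hedged sketch of the accounting implied by the trace; nodes_test's
# starting values are assumed, not read from the log.
nodes_sys=([0]=512 [1]=1024)    # per-node targets recorded by get_nodes
nodes_test=([0]=512 [1]=1024)   # assumed expected per-node allocation
nr_hugepages=1536 resv=0 surp=0

(( nr_hugepages + surp + resv == 1536 ))   # the hugepages.sh@107/@110 style checks
for node in "${!nodes_test[@]}"; do
  (( nodes_test[node] += 0 ))              # HugePages_Surp was 0 on both nodes
done
echo $(( nodes_sys[0] + nodes_sys[1] ))    # 1536, matching HugePages_Total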
00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 
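The records being traced here are the per-key scan inside setup/common.sh's get_meminfo helper: it loads /proc/meminfo (or the per-node meminfo file when a NUMA node is given), strips any "Node <N> " prefix, then splits each "Key: value" line with IFS=': ' and keeps reading until the requested field matches (HugePages_Surp here; AnonHugePages and HugePages_Rsvd later in this log), which is why every meminfo key shows up once per lookup. A minimal bash sketch of that pattern, reconstructed from the trace rather than copied from the upstream script, so names and line-for-line details are assumptions:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern visible at setup/common.sh@17-33 in this trace.
# Reconstructed for illustration only; the real helper in the SPDK tree may differ.
shopt -s extglob   # needed for the +([0-9]) pattern used to strip the per-node prefix

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    local -a mem

    # @23: with a node argument, read that node's meminfo instead of the global one.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"            # @28: slurp the file into an array
    mem=("${mem[@]#Node +([0-9]) }")     # @29: per-node lines start with "Node <N> "

    # @31-@33: split "Key: value [kB]" and return the value of the requested key.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")  # the printf '%s\n' 'MemTotal: ...' records in the
                                         # trace correspond to this step (upstream it appears
                                         # to go through a helper at common.sh@16)
    return 1
}

# Example lookups matching the ones in this log:
#   surp=$(get_meminfo HugePages_Surp)     -> 0
#   anon=$(get_meminfo AnonHugePages)      -> 0 (kB)
#   free=$(get_meminfo HugePages_Free 0)   -> per-node count for node0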
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.369 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:49.370 node0=512 expecting 512 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:49.370 node1=1024 expecting 1024 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:49.370 00:05:49.370 real 0m2.799s 00:05:49.370 user 0m1.194s 00:05:49.370 sys 0m1.674s 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.370 10:54:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:49.370 ************************************ 00:05:49.370 END TEST custom_alloc 00:05:49.370 ************************************ 00:05:49.370 10:54:08 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:49.370 10:54:08 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.370 10:54:08 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.370 10:54:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:49.370 ************************************ 00:05:49.370 START TEST no_shrink_alloc 00:05:49.370 ************************************ 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.370 10:54:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:51.912 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:51.912 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:51.912 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:51.912 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:51.912 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:51.912 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:51.912 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:51.912 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:51.912 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:51.912 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:51.912 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:51.912 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:51.912 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:51.912 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:51.912 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:52.177 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:52.177 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170348072 kB' 'MemAvailable: 173584272 kB' 'Buffers: 3896 kB' 'Cached: 14752784 kB' 'SwapCached: 0 kB' 'Active: 11632496 kB' 'Inactive: 3694312 kB' 'Active(anon): 11214540 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573424 kB' 'Mapped: 219532 kB' 'Shmem: 10644412 kB' 'KReclaimable: 536296 kB' 'Slab: 1192488 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 656192 kB' 'KernelStack: 20576 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12748920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317208 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.177 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.178 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170353348 kB' 'MemAvailable: 173589548 kB' 'Buffers: 3896 kB' 'Cached: 14752784 kB' 'SwapCached: 0 kB' 'Active: 11632160 kB' 'Inactive: 3694312 kB' 'Active(anon): 11214204 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573152 kB' 'Mapped: 219516 kB' 'Shmem: 10644412 kB' 'KReclaimable: 536296 kB' 'Slab: 1192488 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 656192 kB' 'KernelStack: 20560 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12748936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 
10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.179 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 
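Earlier in this log (hugepages.sh@117-130, the "node0=512 expecting 512" / "node1=1024 expecting 1024" lines that close out custom_alloc), the surplus returned by get_meminfo is folded into the per-node tallies and the resulting distribution is compared against what the test asked for, e.g. [[ 512,1024 == 512,1024 ]]. A rough, self-contained sketch of that bookkeeping with hypothetical example values; names follow the trace, but the upstream hugepages.sh fills these arrays from get_test_nr_hugepages_per_node and the per-node counters, and its exact sort/join logic is assumed here:

#!/usr/bin/env bash
# Illustrative reconstruction of the hugepages.sh@117-130 bookkeeping, not the upstream function.

declare -A sorted_t=() sorted_s=()
declare -a nodes_test=([0]=512 [1]=1024)   # hypothetical: pages the test allocated per node
declare -a nodes_sys=([0]=512 [1]=1024)    # hypothetical: pages the system reports per node

for node in "${!nodes_test[@]}"; do
    ((nodes_test[node] += 0))               # @117: fold in this node's surplus pages (0 in this run)
    sorted_t[${nodes_test[node]}]=1         # @127: record the distinct per-node counts
    sorted_s[${nodes_sys[node]}]=1
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"   # @128
done

# @130: join the sorted sets of counts and compare, e.g. "512,1024" == "512,1024"
tested=$(printf '%s\n' "${!sorted_t[@]}" | sort -n | paste -sd, -)
wanted=$(printf '%s\n' "${!sorted_s[@]}" | sort -n | paste -sd, -)
[[ $tested == "$wanted" ]] && echo "per-node hugepage split matches"

For the no_shrink_alloc run traced here, the same check will operate on a single entry, since the hugepages.sh@51-@73 lines above pin the whole 1024-page pool to node0 (nodes_test[0]=1024).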
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.180 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 
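The "Already using the vfio-pci driver" lines printed by scripts/setup.sh before this verification (the 0000:00:04.x, 0000:5e:00.0 and 0000:80:04.x devices) just mean no rebinding was needed: a PCI device's current binding is exposed as a driver symlink under sysfs. A hypothetical way to inspect the same state by hand, not taken from setup.sh itself; the BDFs are examples copied from the listing above:

#!/usr/bin/env bash
# Hypothetical check of PCI driver bindings consistent with the setup.sh output in this log.
for bdf in 0000:5e:00.0 0000:00:04.0 0000:80:04.0; do
    link=/sys/bus/pci/devices/$bdf/driver
    if [[ -e $link ]]; then
        echo "$bdf bound to $(basename "$(readlink -f "$link")")"   # e.g. vfio-pci
    else
        echo "$bdf not bound to any driver"
    fi
done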
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170353804 kB' 'MemAvailable: 173590004 kB' 'Buffers: 3896 kB' 'Cached: 14752804 kB' 'SwapCached: 0 kB' 'Active: 11632156 kB' 'Inactive: 3694312 kB' 'Active(anon): 11214200 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573136 kB' 'Mapped: 219516 kB' 'Shmem: 10644432 kB' 'KReclaimable: 536296 kB' 'Slab: 1192604 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 656308 kB' 'KernelStack: 20560 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12748960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 
kB' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.181 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 
10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.182 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:52.183 10:54:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:52.183 nr_hugepages=1024 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:52.183 resv_hugepages=0 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:52.183 surplus_hugepages=0 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:52.183 anon_hugepages=0 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170353804 kB' 'MemAvailable: 173590004 kB' 'Buffers: 3896 kB' 'Cached: 14752844 kB' 'SwapCached: 0 kB' 'Active: 11631876 kB' 'Inactive: 3694312 kB' 'Active(anon): 11213920 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572732 kB' 'Mapped: 219516 kB' 'Shmem: 10644472 kB' 'KReclaimable: 536296 kB' 'Slab: 1192604 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 656308 kB' 'KernelStack: 20544 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12748980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.183 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 
10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:52.184 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91642780 kB' 'MemUsed: 5972848 kB' 'SwapCached: 0 kB' 'Active: 1804992 kB' 'Inactive: 216924 kB' 'Active(anon): 1643168 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1860752 kB' 'Mapped: 86336 kB' 'AnonPages: 164404 kB' 'Shmem: 1482004 kB' 'KernelStack: 11864 kB' 'PageTables: 3052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347432 kB' 'Slab: 666920 kB' 'SReclaimable: 347432 kB' 'SUnreclaim: 319488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.185 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:52.186 node0=1024 expecting 1024 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.186 10:54:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:54.724 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:54.724 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:54.724 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:54.724 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:54.724 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:54.724 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:54.724 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:54.724 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:54.724 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:54.725 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:54.725 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:54.725 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:54.725 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:54.725 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:54.725 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:54.725 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:54.725 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:54.725 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- 
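Note on the trace above: each get_meminfo call in setup/common.sh walks /proc/meminfo one "Key: value" pair at a time with IFS=': ', skipping (continue) every key until the requested one matches, then echoes its value; here HugePages_Surp resolves to 0, the hugepages check prints node0=1024 expecting 1024, and scripts/setup.sh then reports that the listed PCI devices are already bound to vfio-pci and that the requested 512 hugepages are already covered by the 1024 pages allocated on node0. A minimal sketch of that scan pattern, using a hypothetical helper name rather than the real setup/common.sh function:

    #!/usr/bin/env bash
    # Hypothetical, simplified re-creation of the key scan seen in the trace:
    # read "Key: value" pairs and print the value for the requested key,
    # defaulting to 0 when the key is absent (an assumption for this sketch).
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as in the trace
            echo "$val"                        # the kB unit lands in "_" and is dropped
            return 0
        done < /proc/meminfo
        echo 0
    }

    get_meminfo_value HugePages_Surp   # prints 0 on this node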
setup/hugepages.sh@90 -- # local sorted_t 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.990 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170369244 kB' 'MemAvailable: 173605444 kB' 'Buffers: 3896 kB' 'Cached: 14752908 kB' 'SwapCached: 0 kB' 'Active: 11633324 kB' 'Inactive: 3694312 kB' 'Active(anon): 11215368 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574088 kB' 'Mapped: 219588 kB' 'Shmem: 10644536 kB' 'KReclaimable: 536296 kB' 'Slab: 1191688 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 655392 kB' 'KernelStack: 20592 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12749788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317304 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.991 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
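At this point verify_nr_hugepages (setup/hugepages.sh) has confirmed that transparent hugepages are not set to [never], read AnonHugePages from the global /proc/meminfo (anon=0), and is repeating the same scan for HugePages_Surp. A rough condensation of that traced sequence, reusing the hypothetical get_meminfo_value helper from the earlier sketch and leaving out the real script's per-node bookkeeping:

    # Hypothetical condensation of the traced verify_nr_hugepages steps.
    thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp_state != *"[never]"* ]]; then
        # THP is not pinned to [never], so AnonHugePages is worth reading.
        anon=$(get_meminfo_value AnonHugePages)    # 0 kB in this run
    fi
    surp=$(get_meminfo_value HugePages_Surp)       # 0 in this run
    echo "anon=$anon surp=$surp"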
mapfile -t mem 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170370168 kB' 'MemAvailable: 173606368 kB' 'Buffers: 3896 kB' 'Cached: 14752908 kB' 'SwapCached: 0 kB' 'Active: 11632672 kB' 'Inactive: 3694312 kB' 'Active(anon): 11214716 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573432 kB' 'Mapped: 219520 kB' 'Shmem: 10644536 kB' 'KReclaimable: 536296 kB' 'Slab: 1191736 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 655440 kB' 'KernelStack: 20576 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12749804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.992 10:54:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.992 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 
10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.993 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc 
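The scan now repeats once more for HugePages_Rsvd. Before each scan, get_meminfo tests whether a per-node meminfo file exists; because node= is empty in this trace, the /sys/devices/system/node/node/meminfo check fails and it falls back to the global /proc/meminfo. A hedged sketch of that source selection, again with hypothetical names rather than the exact setup/common.sh variables:

    # Hypothetical sketch: use the per-node meminfo when a node id is supplied,
    # otherwise fall back to the system-wide /proc/meminfo (as in this run).
    meminfo_source() {
        local node=$1
        local per_node="/sys/devices/system/node/node${node}/meminfo"
        if [[ -n $node && -e $per_node ]]; then
            # Per-node lines carry a "Node N " prefix, which the traced script
            # strips with: mem=("${mem[@]#Node +([0-9]) }")
            echo "$per_node"
        else
            echo /proc/meminfo
        fi
    }

    resv=$(get_meminfo_value HugePages_Rsvd)   # the snapshot above shows HugePages_Rsvd: 0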
-- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170370168 kB' 'MemAvailable: 173606368 kB' 'Buffers: 3896 kB' 'Cached: 14752908 kB' 'SwapCached: 0 kB' 'Active: 11632672 kB' 'Inactive: 3694312 kB' 'Active(anon): 11214716 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573432 kB' 'Mapped: 219520 kB' 'Shmem: 10644536 kB' 'KReclaimable: 536296 kB' 'Slab: 1191736 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 655440 kB' 'KernelStack: 20576 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12749828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.994 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.995 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 
10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:54.996 nr_hugepages=1024 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:54.996 resv_hugepages=0 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:54.996 surplus_hugepages=0 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:54.996 anon_hugepages=0 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170370992 kB' 'MemAvailable: 173607192 kB' 'Buffers: 3896 kB' 'Cached: 14752912 kB' 'SwapCached: 0 kB' 'Active: 11632836 kB' 'Inactive: 3694312 kB' 'Active(anon): 11214880 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573592 kB' 'Mapped: 219520 kB' 'Shmem: 10644540 kB' 'KReclaimable: 536296 kB' 'Slab: 1191736 kB' 'SReclaimable: 536296 kB' 'SUnreclaim: 655440 kB' 'KernelStack: 20560 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12749848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 118656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3951572 kB' 'DirectMap2M: 33476608 kB' 'DirectMap1G: 164626432 kB' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.996 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.997 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91639384 kB' 'MemUsed: 5976244 kB' 'SwapCached: 0 kB' 'Active: 1805132 kB' 'Inactive: 
216924 kB' 'Active(anon): 1643308 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1860856 kB' 'Mapped: 86340 kB' 'AnonPages: 164392 kB' 'Shmem: 1482108 kB' 'KernelStack: 11864 kB' 'PageTables: 3044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347432 kB' 'Slab: 666156 kB' 'SReclaimable: 347432 kB' 'SUnreclaim: 318724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.998 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 
10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:54.999 10:54:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:54.999 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:55.000 node0=1024 expecting 1024 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:55.000 00:05:55.000 real 0m5.622s 00:05:55.000 user 0m2.240s 00:05:55.000 sys 0m3.516s 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.000 10:54:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:55.000 ************************************ 00:05:55.000 END TEST no_shrink_alloc 00:05:55.000 ************************************ 00:05:55.000 10:54:14 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:55.000 10:54:14 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:55.000 10:54:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
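The wall of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" entries above is setup/common.sh's get_meminfo scanning every line of /proc/meminfo (or a per-node meminfo file) until it reaches the requested key; the backslashes are simply how bash xtrace renders the quoted comparison operand. A minimal stand-alone sketch of that lookup, using a hypothetical helper name and skipping the "Node <n>" prefix handling the real script performs, is:

    # Print the value of a single key (e.g. HugePages_Total) from a meminfo file.
    # Hypothetical, simplified helper -- the real setup/common.sh get_meminfo also
    # strips the "Node <n> " prefix that per-node meminfo files put on every line.
    get_one_meminfo() {
        local get=$1 mem_f=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < "$mem_f"
        echo 0
    }

    # Example (same numbers the trace checks against):
    # nr=$(get_one_meminfo HugePages_Total)                                           # 1024
    # surp0=$(get_one_meminfo HugePages_Surp /sys/devices/system/node/node0/meminfo)  # 0

With those values in hand the no_shrink_alloc check above reduces to arithmetic: the 1024 total pages equal nr_hugepages (1024) plus surplus (0) plus reserved (0), and node0 still holds all of them, hence the "node0=1024 expecting 1024" line.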
00:05:55.000 10:54:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:55.000 10:54:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:55.000 10:54:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:55.000 10:54:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:55.000 10:54:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:55.000 10:54:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:55.000 10:54:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:55.000 10:54:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:55.000 10:54:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:55.000 10:54:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:55.000 10:54:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:55.000 00:05:55.000 real 0m21.374s 00:05:55.000 user 0m8.281s 00:05:55.000 sys 0m12.631s 00:05:55.000 10:54:14 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.000 10:54:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:55.000 ************************************ 00:05:55.000 END TEST hugepages 00:05:55.000 ************************************ 00:05:55.000 10:54:14 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:55.000 10:54:14 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.000 10:54:14 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.000 10:54:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 ************************************ 00:05:55.261 START TEST driver 00:05:55.261 ************************************ 00:05:55.261 10:54:14 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:55.261 * Looking for test storage... 
00:05:55.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:55.261 10:54:14 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:55.261 10:54:14 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:55.261 10:54:14 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:59.512 10:54:18 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:59.512 10:54:18 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.512 10:54:18 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.512 10:54:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:59.512 ************************************ 00:05:59.512 START TEST guess_driver 00:05:59.512 ************************************ 00:05:59.512 10:54:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:59.512 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:59.512 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:59.512 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:59.512 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:59.512 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:59.512 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:59.512 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:59.513 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:59.513 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:59.513 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:59.513 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:59.513 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:59.513 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:59.513 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:59.513 10:54:18 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:59.513 Looking for driver=vfio-pci 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:59.513 10:54:18 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:01.425 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.425 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.425 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.425 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.425 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.425 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.425 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.425 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.425 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.735 10:54:20 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:01.735 10:54:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:02.698 10:54:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:02.698 10:54:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:02.698 10:54:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:02.698 10:54:21 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:02.698 10:54:21 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:02.698 10:54:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:02.698 10:54:21 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:06.902 00:06:06.902 real 0m7.331s 00:06:06.902 user 0m2.065s 00:06:06.902 sys 0m3.652s 00:06:06.902 10:54:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.902 10:54:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:06.902 ************************************ 00:06:06.902 END TEST guess_driver 00:06:06.902 ************************************ 00:06:06.902 00:06:06.902 real 0m11.169s 00:06:06.902 user 0m3.128s 00:06:06.902 sys 0m5.715s 00:06:06.902 10:54:25 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.902 
10:54:25 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:06.902 ************************************ 00:06:06.902 END TEST driver 00:06:06.902 ************************************ 00:06:06.902 10:54:25 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:06:06.902 10:54:25 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.902 10:54:25 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.902 10:54:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:06.902 ************************************ 00:06:06.902 START TEST devices 00:06:06.902 ************************************ 00:06:06.902 10:54:25 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:06:06.902 * Looking for test storage... 00:06:06.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:06.902 10:54:25 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:06.902 10:54:25 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:06.902 10:54:25 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:06.902 10:54:25 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:09.446 10:54:28 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:09.446 10:54:28 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:09.446 10:54:28 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:09.446 10:54:28 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:09.446 10:54:28 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:09.446 10:54:28 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:09.446 10:54:28 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:09.446 10:54:28 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:09.446 10:54:28 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:09.446 10:54:28 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:06:09.446 No valid GPT data, 
bailing 00:06:09.446 10:54:28 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:09.446 10:54:28 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:09.446 10:54:28 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:09.446 10:54:28 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:09.446 10:54:28 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:09.446 10:54:28 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:09.446 10:54:28 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:09.446 10:54:28 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.446 10:54:28 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.446 10:54:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:09.446 ************************************ 00:06:09.446 START TEST nvme_mount 00:06:09.446 ************************************ 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:09.446 10:54:28 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:09.446 10:54:28 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:10.830 Creating new GPT entries in memory. 00:06:10.830 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:10.830 other utilities. 00:06:10.830 10:54:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:10.830 10:54:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:10.830 10:54:29 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:10.830 10:54:29 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:10.830 10:54:29 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:11.771 Creating new GPT entries in memory. 00:06:11.771 The operation has completed successfully. 00:06:11.771 10:54:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:11.771 10:54:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:11.771 10:54:30 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1259940 00:06:11.771 10:54:30 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:11.771 10:54:30 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:06:11.771 10:54:30 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:11.771 10:54:30 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:11.771 10:54:30 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
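For readers skimming the trace, the partition/format/mount work above boils down to roughly the following plain commands (a sketch reconstructed from this run, not the helper scripts themselves; the disk name, sector bounds and mount point are the ones used on this host, and the test-file creation is written as an explicit redirect because xtrace does not print the redirection attached to the ':' at devices.sh@56):
# Sketch of the nvme_mount setup steps seen above (values taken from this run).
disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
sgdisk "$disk" --zap-all                            # wipe any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 1 GiB: 1073741824 / 512 = 2097152 sectors
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"                           # quiet, force re-format
mount "${disk}p1" "$mnt"
: > "$mnt/test_nvme"                                # dummy file that verify() checks for
verify() then walks the setup.sh config output below and marks found=1 once 0000:5e:00.0 is reported with "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev".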
00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:11.771 10:54:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:14.314 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:14.314 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:14.574 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:14.574 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:14.574 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:14.574 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:06:14.574 10:54:33 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:14.574 10:54:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:17.119 10:54:36 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:17.120 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:17.120 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:06:17.120 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:17.120 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:17.120 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:17.120 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:17.120 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:17.120 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:17.120 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:17.120 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:17.120 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:17.120 10:54:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:17.120 10:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:17.120 10:54:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:19.662 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:19.922 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:19.922 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:19.922 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:19.922 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:19.922 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:19.922 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:19.922 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:19.922 10:54:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:19.922 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:19.922 00:06:19.922 real 0m10.364s 00:06:19.922 user 0m3.044s 00:06:19.922 sys 0m5.139s 00:06:19.922 10:54:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.922 10:54:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:19.922 ************************************ 00:06:19.922 END TEST nvme_mount 00:06:19.922 ************************************ 
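Before the dm_mount test begins below, note what the cleanup_nvme teardown in the trace above amounts to (a condensed sketch using this run's device and mount-point names; the real helper's error handling is omitted):
# Sketch of cleanup_nvme: unmount if still mounted, then clear on-disk signatures.
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
if mountpoint -q "$mnt"; then
    umount "$mnt"
fi
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # removes the ext4 magic (53 ef) seen above
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # removes the GPT headers and protective MBR (55 aa)
This is why the dm_mount run that follows starts from an unpartitioned disk and rebuilds the GPT from scratch.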
00:06:19.922 10:54:39 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:19.922 10:54:39 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.922 10:54:39 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.922 10:54:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:19.922 ************************************ 00:06:19.922 START TEST dm_mount 00:06:19.922 ************************************ 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:19.922 10:54:39 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:20.861 Creating new GPT entries in memory. 00:06:20.861 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:20.861 other utilities. 00:06:20.861 10:54:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:21.121 10:54:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:21.121 10:54:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:21.121 10:54:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:21.121 10:54:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:22.059 Creating new GPT entries in memory. 00:06:22.059 The operation has completed successfully. 
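The sector bounds passed to sgdisk here, and for the second partition created just below, follow directly from the arithmetic visible in the trace ((( size /= 512 )) plus the part_start/part_end updates). A short worked sketch, assuming the same 1 GiB-per-partition size as this run:
# How the --new=<part>:<start>:<end> arguments are derived (512-byte sectors).
size=$((1073741824 / 512))          # 2097152 sectors per 1 GiB partition
p1_start=2048                       # fixed first-partition start used by common.sh
p1_end=$((p1_start + size - 1))     # 2099199, as in the sgdisk call above
p2_start=$((p1_end + 1))            # 2099200
p2_end=$((p2_start + size - 1))     # 4196351, as in the sgdisk call that follows
echo "--new=1:${p1_start}:${p1_end} --new=2:${p2_start}:${p2_end}"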
00:06:22.059 10:54:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:22.059 10:54:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:22.059 10:54:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:22.059 10:54:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:22.059 10:54:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:06:23.001 The operation has completed successfully. 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1264103 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:23.001 10:54:42 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:23.261 10:54:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:25.850 10:54:45 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:25.850 10:54:45 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.390 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:28.391 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:28.391 00:06:28.391 real 0m8.469s 00:06:28.391 user 0m2.086s 00:06:28.391 sys 0m3.434s 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.391 10:54:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:28.391 ************************************ 00:06:28.391 END TEST dm_mount 00:06:28.391 ************************************ 00:06:28.391 10:54:47 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:28.391 10:54:47 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:28.391 10:54:47 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:28.391 10:54:47 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:28.391 10:54:47 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:28.391 10:54:47 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:28.391 10:54:47 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:28.651 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:28.651 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:28.651 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:28.651 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:28.651 10:54:48 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:28.651 10:54:48 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:28.651 10:54:48 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:28.651 10:54:48 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:28.651 10:54:48 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:28.651 10:54:48 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:28.651 10:54:48 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:28.651 00:06:28.651 real 0m22.376s 00:06:28.651 user 0m6.399s 00:06:28.651 sys 0m10.705s 00:06:28.651 10:54:48 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.651 10:54:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:28.651 ************************************ 00:06:28.651 END TEST devices 00:06:28.651 ************************************ 00:06:28.911 00:06:28.911 real 1m14.318s 00:06:28.911 user 0m24.317s 00:06:28.911 sys 0m40.649s 00:06:28.911 10:54:48 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.911 10:54:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:28.911 ************************************ 00:06:28.911 END TEST setup.sh 00:06:28.911 ************************************ 00:06:28.911 10:54:48 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:31.452 Hugepages 00:06:31.452 node hugesize free / total 00:06:31.452 node0 1048576kB 0 / 0 00:06:31.452 node0 2048kB 2048 / 2048 00:06:31.452 node1 1048576kB 0 / 0 00:06:31.452 node1 2048kB 0 / 0 00:06:31.452 00:06:31.452 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:31.452 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:31.453 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:31.453 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:31.453 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:31.453 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:31.453 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:31.453 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:31.453 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:06:31.712 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:06:31.712 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:31.712 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:31.712 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:31.712 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:31.712 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:31.712 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:31.712 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:31.712 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:31.712 10:54:51 -- spdk/autotest.sh@130 -- # uname -s 00:06:31.712 10:54:51 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:31.712 10:54:51 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:31.712 10:54:51 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:34.255 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:34.255 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:35.194 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:06:35.194 10:54:54 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:36.577 10:54:55 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:36.577 10:54:55 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:36.577 10:54:55 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:36.577 10:54:55 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:36.577 10:54:55 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:36.577 10:54:55 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:36.577 10:54:55 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:36.577 10:54:55 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:36.577 10:54:55 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:36.577 10:54:55 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:36.577 10:54:55 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:06:36.577 10:54:55 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:39.121 Waiting for block devices as requested 00:06:39.121 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:06:39.121 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:39.121 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:39.121 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:39.381 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:39.381 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:39.381 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:39.381 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:39.642 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:39.642 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:39.642 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:39.902 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:39.902 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:39.902 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:39.902 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:40.163 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:40.163 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:40.163 10:54:59 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 
00:06:40.163 10:54:59 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:06:40.163 10:54:59 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:40.163 10:54:59 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:06:40.163 10:54:59 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:06:40.163 10:54:59 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:06:40.163 10:54:59 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:06:40.163 10:54:59 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:40.163 10:54:59 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:40.163 10:54:59 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:40.163 10:54:59 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:40.163 10:54:59 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:40.163 10:54:59 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:40.163 10:54:59 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:06:40.163 10:54:59 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:40.163 10:54:59 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:40.163 10:54:59 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:40.163 10:54:59 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:40.163 10:54:59 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:40.163 10:54:59 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:40.163 10:54:59 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:40.163 10:54:59 -- common/autotest_common.sh@1557 -- # continue 00:06:40.163 10:54:59 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:40.163 10:54:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.163 10:54:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.163 10:54:59 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:40.163 10:54:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:40.163 10:54:59 -- common/autotest_common.sh@10 -- # set +x 00:06:40.163 10:54:59 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:43.490 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:43.490 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:44.068 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:06:44.068 10:55:03 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:44.068 10:55:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:44.068 10:55:03 -- 
common/autotest_common.sh@10 -- # set +x 00:06:44.068 10:55:03 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:44.068 10:55:03 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:44.068 10:55:03 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:44.068 10:55:03 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:44.068 10:55:03 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:44.069 10:55:03 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:44.069 10:55:03 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:44.069 10:55:03 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:44.069 10:55:03 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:44.069 10:55:03 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:44.069 10:55:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:44.069 10:55:03 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:44.069 10:55:03 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:06:44.069 10:55:03 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:44.069 10:55:03 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:06:44.069 10:55:03 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:06:44.069 10:55:03 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:44.069 10:55:03 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:06:44.069 10:55:03 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:06:44.069 10:55:03 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:06:44.069 10:55:03 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1272873 00:06:44.069 10:55:03 -- common/autotest_common.sh@1598 -- # waitforlisten 1272873 00:06:44.069 10:55:03 -- common/autotest_common.sh@831 -- # '[' -z 1272873 ']' 00:06:44.069 10:55:03 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.069 10:55:03 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.069 10:55:03 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.069 10:55:03 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:44.070 10:55:03 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.070 10:55:03 -- common/autotest_common.sh@10 -- # set +x 00:06:44.332 [2024-07-26 10:55:03.566605] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
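For reference, the opal_revert_cleanup trace above selects controllers by reading the PCI device ID from sysfs (cat /sys/bus/pci/devices/0000:5e:00.0/device, compared against 0x0a54). The following is a minimal illustrative sketch of that filter, not part of the test suite; it takes candidate BDFs from the kernel nvme driver's sysfs directory as a simplified stand-in for the gen_nvme.sh | jq enumeration used by the script.

#!/usr/bin/env python3
# Illustrative sketch only: mirrors the 0x0a54 device-ID filter shown in the
# trace above. Candidate BDFs come from the nvme driver's sysfs directory,
# a simplification of the gen_nvme.sh enumeration used by the autotest script.
from pathlib import Path

def nvme_bdfs_by_device_id(target_id="0x0a54"):
    drv = Path("/sys/bus/pci/drivers/nvme")
    matches = []
    if not drv.is_dir():
        return matches
    for entry in drv.iterdir():
        # Bound devices appear as BDF-named entries; control files like
        # "bind"/"unbind" have no matching /sys/bus/pci/devices node.
        dev_file = Path("/sys/bus/pci/devices") / entry.name / "device"
        if dev_file.is_file() and dev_file.read_text().strip() == target_id:
            matches.append(entry.name)
    return matches

if __name__ == "__main__":
    for bdf in nvme_bdfs_by_device_id():
        print(bdf)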
00:06:44.332 [2024-07-26 10:55:03.566661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272873 ] 00:06:44.332 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.332 [2024-07-26 10:55:03.621728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.332 [2024-07-26 10:55:03.703456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.976 10:55:04 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.976 10:55:04 -- common/autotest_common.sh@864 -- # return 0 00:06:44.976 10:55:04 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:06:44.976 10:55:04 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:06:44.976 10:55:04 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:06:48.273 nvme0n1 00:06:48.273 10:55:07 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:48.273 [2024-07-26 10:55:07.480504] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:48.273 request: 00:06:48.273 { 00:06:48.273 "nvme_ctrlr_name": "nvme0", 00:06:48.273 "password": "test", 00:06:48.273 "method": "bdev_nvme_opal_revert", 00:06:48.273 "req_id": 1 00:06:48.273 } 00:06:48.273 Got JSON-RPC error response 00:06:48.273 response: 00:06:48.273 { 00:06:48.273 "code": -32602, 00:06:48.273 "message": "Invalid parameters" 00:06:48.273 } 00:06:48.273 10:55:07 -- common/autotest_common.sh@1604 -- # true 00:06:48.273 10:55:07 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:06:48.273 10:55:07 -- common/autotest_common.sh@1608 -- # killprocess 1272873 00:06:48.273 10:55:07 -- common/autotest_common.sh@950 -- # '[' -z 1272873 ']' 00:06:48.273 10:55:07 -- common/autotest_common.sh@954 -- # kill -0 1272873 00:06:48.273 10:55:07 -- common/autotest_common.sh@955 -- # uname 00:06:48.273 10:55:07 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.273 10:55:07 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1272873 00:06:48.273 10:55:07 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.273 10:55:07 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.273 10:55:07 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1272873' 00:06:48.273 killing process with pid 1272873 00:06:48.273 10:55:07 -- common/autotest_common.sh@969 -- # kill 1272873 00:06:48.273 10:55:07 -- common/autotest_common.sh@974 -- # wait 1272873 00:06:49.655 10:55:09 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:49.916 10:55:09 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:49.916 10:55:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:49.916 10:55:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:49.916 10:55:09 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:49.916 10:55:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:49.916 10:55:09 -- common/autotest_common.sh@10 -- # set +x 00:06:49.916 10:55:09 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:49.916 10:55:09 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:49.916 10:55:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:06:49.916 10:55:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.916 10:55:09 -- common/autotest_common.sh@10 -- # set +x 00:06:49.916 ************************************ 00:06:49.916 START TEST env 00:06:49.916 ************************************ 00:06:49.916 10:55:09 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:49.916 * Looking for test storage... 00:06:49.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:49.916 10:55:09 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:49.916 10:55:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.916 10:55:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.916 10:55:09 env -- common/autotest_common.sh@10 -- # set +x 00:06:49.916 ************************************ 00:06:49.916 START TEST env_memory 00:06:49.916 ************************************ 00:06:49.916 10:55:09 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:49.916 00:06:49.916 00:06:49.916 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.916 http://cunit.sourceforge.net/ 00:06:49.916 00:06:49.916 00:06:49.916 Suite: memory 00:06:49.916 Test: alloc and free memory map ...[2024-07-26 10:55:09.345145] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:49.916 passed 00:06:49.916 Test: mem map translation ...[2024-07-26 10:55:09.363130] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:49.916 [2024-07-26 10:55:09.363146] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:49.916 [2024-07-26 10:55:09.363181] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:49.916 [2024-07-26 10:55:09.363189] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:49.916 passed 00:06:49.916 Test: mem map registration ...[2024-07-26 10:55:09.399774] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:49.916 [2024-07-26 10:55:09.399789] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:49.916 passed 00:06:50.177 Test: mem map adjacent registrations ...passed 00:06:50.177 00:06:50.177 Run Summary: Type Total Ran Passed Failed Inactive 00:06:50.177 suites 1 1 n/a 0 0 00:06:50.177 tests 4 4 4 0 0 00:06:50.177 asserts 152 152 152 0 n/a 00:06:50.177 00:06:50.177 Elapsed time = 0.137 seconds 00:06:50.177 00:06:50.177 real 0m0.149s 00:06:50.177 user 0m0.140s 00:06:50.177 sys 0m0.009s 00:06:50.177 10:55:09 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.177 10:55:09 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:06:50.177 ************************************ 00:06:50.177 END TEST env_memory 00:06:50.177 ************************************ 00:06:50.177 10:55:09 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:50.177 10:55:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.177 10:55:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.177 10:55:09 env -- common/autotest_common.sh@10 -- # set +x 00:06:50.177 ************************************ 00:06:50.177 START TEST env_vtophys 00:06:50.177 ************************************ 00:06:50.177 10:55:09 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:50.177 EAL: lib.eal log level changed from notice to debug 00:06:50.177 EAL: Detected lcore 0 as core 0 on socket 0 00:06:50.177 EAL: Detected lcore 1 as core 1 on socket 0 00:06:50.177 EAL: Detected lcore 2 as core 2 on socket 0 00:06:50.177 EAL: Detected lcore 3 as core 3 on socket 0 00:06:50.177 EAL: Detected lcore 4 as core 4 on socket 0 00:06:50.177 EAL: Detected lcore 5 as core 5 on socket 0 00:06:50.177 EAL: Detected lcore 6 as core 6 on socket 0 00:06:50.177 EAL: Detected lcore 7 as core 8 on socket 0 00:06:50.177 EAL: Detected lcore 8 as core 9 on socket 0 00:06:50.177 EAL: Detected lcore 9 as core 10 on socket 0 00:06:50.177 EAL: Detected lcore 10 as core 11 on socket 0 00:06:50.177 EAL: Detected lcore 11 as core 12 on socket 0 00:06:50.177 EAL: Detected lcore 12 as core 13 on socket 0 00:06:50.177 EAL: Detected lcore 13 as core 16 on socket 0 00:06:50.177 EAL: Detected lcore 14 as core 17 on socket 0 00:06:50.177 EAL: Detected lcore 15 as core 18 on socket 0 00:06:50.177 EAL: Detected lcore 16 as core 19 on socket 0 00:06:50.177 EAL: Detected lcore 17 as core 20 on socket 0 00:06:50.177 EAL: Detected lcore 18 as core 21 on socket 0 00:06:50.177 EAL: Detected lcore 19 as core 25 on socket 0 00:06:50.177 EAL: Detected lcore 20 as core 26 on socket 0 00:06:50.177 EAL: Detected lcore 21 as core 27 on socket 0 00:06:50.177 EAL: Detected lcore 22 as core 28 on socket 0 00:06:50.177 EAL: Detected lcore 23 as core 29 on socket 0 00:06:50.177 EAL: Detected lcore 24 as core 0 on socket 1 00:06:50.177 EAL: Detected lcore 25 as core 1 on socket 1 00:06:50.177 EAL: Detected lcore 26 as core 2 on socket 1 00:06:50.177 EAL: Detected lcore 27 as core 3 on socket 1 00:06:50.177 EAL: Detected lcore 28 as core 4 on socket 1 00:06:50.177 EAL: Detected lcore 29 as core 5 on socket 1 00:06:50.177 EAL: Detected lcore 30 as core 6 on socket 1 00:06:50.177 EAL: Detected lcore 31 as core 9 on socket 1 00:06:50.177 EAL: Detected lcore 32 as core 10 on socket 1 00:06:50.177 EAL: Detected lcore 33 as core 11 on socket 1 00:06:50.177 EAL: Detected lcore 34 as core 12 on socket 1 00:06:50.177 EAL: Detected lcore 35 as core 13 on socket 1 00:06:50.177 EAL: Detected lcore 36 as core 16 on socket 1 00:06:50.177 EAL: Detected lcore 37 as core 17 on socket 1 00:06:50.177 EAL: Detected lcore 38 as core 18 on socket 1 00:06:50.177 EAL: Detected lcore 39 as core 19 on socket 1 00:06:50.177 EAL: Detected lcore 40 as core 20 on socket 1 00:06:50.177 EAL: Detected lcore 41 as core 21 on socket 1 00:06:50.177 EAL: Detected lcore 42 as core 24 on socket 1 00:06:50.177 EAL: Detected lcore 43 as core 25 on socket 1 00:06:50.184 EAL: Detected lcore 44 as core 26 on socket 1 00:06:50.184 EAL: Detected lcore 45 as core 27 on socket 1 
00:06:50.184 EAL: Detected lcore 46 as core 28 on socket 1 00:06:50.184 EAL: Detected lcore 47 as core 29 on socket 1 00:06:50.184 EAL: Detected lcore 48 as core 0 on socket 0 00:06:50.184 EAL: Detected lcore 49 as core 1 on socket 0 00:06:50.184 EAL: Detected lcore 50 as core 2 on socket 0 00:06:50.184 EAL: Detected lcore 51 as core 3 on socket 0 00:06:50.184 EAL: Detected lcore 52 as core 4 on socket 0 00:06:50.184 EAL: Detected lcore 53 as core 5 on socket 0 00:06:50.184 EAL: Detected lcore 54 as core 6 on socket 0 00:06:50.184 EAL: Detected lcore 55 as core 8 on socket 0 00:06:50.184 EAL: Detected lcore 56 as core 9 on socket 0 00:06:50.184 EAL: Detected lcore 57 as core 10 on socket 0 00:06:50.184 EAL: Detected lcore 58 as core 11 on socket 0 00:06:50.184 EAL: Detected lcore 59 as core 12 on socket 0 00:06:50.184 EAL: Detected lcore 60 as core 13 on socket 0 00:06:50.184 EAL: Detected lcore 61 as core 16 on socket 0 00:06:50.184 EAL: Detected lcore 62 as core 17 on socket 0 00:06:50.184 EAL: Detected lcore 63 as core 18 on socket 0 00:06:50.184 EAL: Detected lcore 64 as core 19 on socket 0 00:06:50.184 EAL: Detected lcore 65 as core 20 on socket 0 00:06:50.184 EAL: Detected lcore 66 as core 21 on socket 0 00:06:50.184 EAL: Detected lcore 67 as core 25 on socket 0 00:06:50.184 EAL: Detected lcore 68 as core 26 on socket 0 00:06:50.184 EAL: Detected lcore 69 as core 27 on socket 0 00:06:50.184 EAL: Detected lcore 70 as core 28 on socket 0 00:06:50.184 EAL: Detected lcore 71 as core 29 on socket 0 00:06:50.184 EAL: Detected lcore 72 as core 0 on socket 1 00:06:50.184 EAL: Detected lcore 73 as core 1 on socket 1 00:06:50.184 EAL: Detected lcore 74 as core 2 on socket 1 00:06:50.184 EAL: Detected lcore 75 as core 3 on socket 1 00:06:50.184 EAL: Detected lcore 76 as core 4 on socket 1 00:06:50.184 EAL: Detected lcore 77 as core 5 on socket 1 00:06:50.184 EAL: Detected lcore 78 as core 6 on socket 1 00:06:50.184 EAL: Detected lcore 79 as core 9 on socket 1 00:06:50.184 EAL: Detected lcore 80 as core 10 on socket 1 00:06:50.184 EAL: Detected lcore 81 as core 11 on socket 1 00:06:50.184 EAL: Detected lcore 82 as core 12 on socket 1 00:06:50.184 EAL: Detected lcore 83 as core 13 on socket 1 00:06:50.184 EAL: Detected lcore 84 as core 16 on socket 1 00:06:50.184 EAL: Detected lcore 85 as core 17 on socket 1 00:06:50.184 EAL: Detected lcore 86 as core 18 on socket 1 00:06:50.184 EAL: Detected lcore 87 as core 19 on socket 1 00:06:50.184 EAL: Detected lcore 88 as core 20 on socket 1 00:06:50.184 EAL: Detected lcore 89 as core 21 on socket 1 00:06:50.184 EAL: Detected lcore 90 as core 24 on socket 1 00:06:50.184 EAL: Detected lcore 91 as core 25 on socket 1 00:06:50.184 EAL: Detected lcore 92 as core 26 on socket 1 00:06:50.184 EAL: Detected lcore 93 as core 27 on socket 1 00:06:50.184 EAL: Detected lcore 94 as core 28 on socket 1 00:06:50.184 EAL: Detected lcore 95 as core 29 on socket 1 00:06:50.184 EAL: Maximum logical cores by configuration: 128 00:06:50.184 EAL: Detected CPU lcores: 96 00:06:50.184 EAL: Detected NUMA nodes: 2 00:06:50.185 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:50.185 EAL: Detected shared linkage of DPDK 00:06:50.185 EAL: No shared files mode enabled, IPC will be disabled 00:06:50.185 EAL: Bus pci wants IOVA as 'DC' 00:06:50.185 EAL: Buses did not request a specific IOVA mode. 00:06:50.185 EAL: IOMMU is available, selecting IOVA as VA mode. 
00:06:50.185 EAL: Selected IOVA mode 'VA' 00:06:50.185 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.185 EAL: Probing VFIO support... 00:06:50.185 EAL: IOMMU type 1 (Type 1) is supported 00:06:50.185 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:50.185 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:50.185 EAL: VFIO support initialized 00:06:50.185 EAL: Ask a virtual area of 0x2e000 bytes 00:06:50.185 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:50.185 EAL: Setting up physically contiguous memory... 00:06:50.185 EAL: Setting maximum number of open files to 524288 00:06:50.185 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:50.185 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:50.185 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:50.185 EAL: Ask a virtual area of 0x61000 bytes 00:06:50.185 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:50.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:50.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:50.185 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:50.185 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:50.185 EAL: Ask a virtual area of 0x61000 bytes 00:06:50.185 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:50.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:50.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:50.185 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:50.185 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:50.185 EAL: Ask a virtual area of 0x61000 bytes 00:06:50.185 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:50.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:50.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:50.185 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:50.185 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:50.185 EAL: Ask a virtual area of 0x61000 bytes 00:06:50.185 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:50.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:50.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:50.185 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:50.185 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:50.185 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:50.185 EAL: Ask a virtual area of 0x61000 bytes 00:06:50.185 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:50.185 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:50.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:50.185 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:50.185 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:50.185 EAL: Ask a virtual area of 0x61000 bytes 00:06:50.185 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:50.185 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:50.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:50.185 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:50.185 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:50.185 EAL: Ask a virtual area of 0x61000 bytes 00:06:50.185 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:50.185 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:06:50.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:50.185 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:50.185 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:50.185 EAL: Ask a virtual area of 0x61000 bytes 00:06:50.185 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:50.185 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:50.185 EAL: Ask a virtual area of 0x400000000 bytes 00:06:50.185 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:50.185 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:50.185 EAL: Hugepages will be freed exactly as allocated. 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: TSC frequency is ~2300000 KHz 00:06:50.185 EAL: Main lcore 0 is ready (tid=7ff409f11a00;cpuset=[0]) 00:06:50.185 EAL: Trying to obtain current memory policy. 00:06:50.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:50.185 EAL: Restoring previous memory policy: 0 00:06:50.185 EAL: request: mp_malloc_sync 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: Heap on socket 0 was expanded by 2MB 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:50.185 EAL: Mem event callback 'spdk:(nil)' registered 00:06:50.185 00:06:50.185 00:06:50.185 CUnit - A unit testing framework for C - Version 2.1-3 00:06:50.185 http://cunit.sourceforge.net/ 00:06:50.185 00:06:50.185 00:06:50.185 Suite: components_suite 00:06:50.185 Test: vtophys_malloc_test ...passed 00:06:50.185 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:50.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:50.185 EAL: Restoring previous memory policy: 4 00:06:50.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.185 EAL: request: mp_malloc_sync 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: Heap on socket 0 was expanded by 4MB 00:06:50.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.185 EAL: request: mp_malloc_sync 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: Heap on socket 0 was shrunk by 4MB 00:06:50.185 EAL: Trying to obtain current memory policy. 00:06:50.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:50.185 EAL: Restoring previous memory policy: 4 00:06:50.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.185 EAL: request: mp_malloc_sync 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: Heap on socket 0 was expanded by 6MB 00:06:50.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.185 EAL: request: mp_malloc_sync 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: Heap on socket 0 was shrunk by 6MB 00:06:50.185 EAL: Trying to obtain current memory policy. 
00:06:50.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:50.185 EAL: Restoring previous memory policy: 4 00:06:50.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.185 EAL: request: mp_malloc_sync 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: Heap on socket 0 was expanded by 10MB 00:06:50.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.185 EAL: request: mp_malloc_sync 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: Heap on socket 0 was shrunk by 10MB 00:06:50.185 EAL: Trying to obtain current memory policy. 00:06:50.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:50.185 EAL: Restoring previous memory policy: 4 00:06:50.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.185 EAL: request: mp_malloc_sync 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: Heap on socket 0 was expanded by 18MB 00:06:50.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.185 EAL: request: mp_malloc_sync 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: Heap on socket 0 was shrunk by 18MB 00:06:50.185 EAL: Trying to obtain current memory policy. 00:06:50.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:50.185 EAL: Restoring previous memory policy: 4 00:06:50.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.185 EAL: request: mp_malloc_sync 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: Heap on socket 0 was expanded by 34MB 00:06:50.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.185 EAL: request: mp_malloc_sync 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: Heap on socket 0 was shrunk by 34MB 00:06:50.185 EAL: Trying to obtain current memory policy. 00:06:50.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:50.185 EAL: Restoring previous memory policy: 4 00:06:50.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.185 EAL: request: mp_malloc_sync 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: Heap on socket 0 was expanded by 66MB 00:06:50.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.185 EAL: request: mp_malloc_sync 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: Heap on socket 0 was shrunk by 66MB 00:06:50.185 EAL: Trying to obtain current memory policy. 00:06:50.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:50.185 EAL: Restoring previous memory policy: 4 00:06:50.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.185 EAL: request: mp_malloc_sync 00:06:50.185 EAL: No shared files mode enabled, IPC is disabled 00:06:50.185 EAL: Heap on socket 0 was expanded by 130MB 00:06:50.446 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.446 EAL: request: mp_malloc_sync 00:06:50.446 EAL: No shared files mode enabled, IPC is disabled 00:06:50.446 EAL: Heap on socket 0 was shrunk by 130MB 00:06:50.446 EAL: Trying to obtain current memory policy. 
00:06:50.446 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:50.446 EAL: Restoring previous memory policy: 4 00:06:50.446 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.446 EAL: request: mp_malloc_sync 00:06:50.446 EAL: No shared files mode enabled, IPC is disabled 00:06:50.446 EAL: Heap on socket 0 was expanded by 258MB 00:06:50.446 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.446 EAL: request: mp_malloc_sync 00:06:50.446 EAL: No shared files mode enabled, IPC is disabled 00:06:50.446 EAL: Heap on socket 0 was shrunk by 258MB 00:06:50.446 EAL: Trying to obtain current memory policy. 00:06:50.446 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:50.446 EAL: Restoring previous memory policy: 4 00:06:50.446 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.446 EAL: request: mp_malloc_sync 00:06:50.446 EAL: No shared files mode enabled, IPC is disabled 00:06:50.446 EAL: Heap on socket 0 was expanded by 514MB 00:06:50.706 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.706 EAL: request: mp_malloc_sync 00:06:50.706 EAL: No shared files mode enabled, IPC is disabled 00:06:50.706 EAL: Heap on socket 0 was shrunk by 514MB 00:06:50.706 EAL: Trying to obtain current memory policy. 00:06:50.706 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:50.966 EAL: Restoring previous memory policy: 4 00:06:50.966 EAL: Calling mem event callback 'spdk:(nil)' 00:06:50.966 EAL: request: mp_malloc_sync 00:06:50.966 EAL: No shared files mode enabled, IPC is disabled 00:06:50.966 EAL: Heap on socket 0 was expanded by 1026MB 00:06:50.966 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.225 EAL: request: mp_malloc_sync 00:06:51.225 EAL: No shared files mode enabled, IPC is disabled 00:06:51.225 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:51.225 passed 00:06:51.225 00:06:51.225 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.225 suites 1 1 n/a 0 0 00:06:51.225 tests 2 2 2 0 0 00:06:51.225 asserts 497 497 497 0 n/a 00:06:51.225 00:06:51.225 Elapsed time = 0.965 seconds 00:06:51.225 EAL: Calling mem event callback 'spdk:(nil)' 00:06:51.225 EAL: request: mp_malloc_sync 00:06:51.225 EAL: No shared files mode enabled, IPC is disabled 00:06:51.225 EAL: Heap on socket 0 was shrunk by 2MB 00:06:51.225 EAL: No shared files mode enabled, IPC is disabled 00:06:51.225 EAL: No shared files mode enabled, IPC is disabled 00:06:51.225 EAL: No shared files mode enabled, IPC is disabled 00:06:51.225 00:06:51.225 real 0m1.074s 00:06:51.225 user 0m0.635s 00:06:51.225 sys 0m0.410s 00:06:51.225 10:55:10 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.225 10:55:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:51.225 ************************************ 00:06:51.225 END TEST env_vtophys 00:06:51.225 ************************************ 00:06:51.225 10:55:10 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:51.225 10:55:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.225 10:55:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.225 10:55:10 env -- common/autotest_common.sh@10 -- # set +x 00:06:51.225 ************************************ 00:06:51.225 START TEST env_pci 00:06:51.225 ************************************ 00:06:51.225 10:55:10 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:51.225 00:06:51.225 00:06:51.225 CUnit - A unit testing 
framework for C - Version 2.1-3 00:06:51.225 http://cunit.sourceforge.net/ 00:06:51.225 00:06:51.225 00:06:51.225 Suite: pci 00:06:51.225 Test: pci_hook ...[2024-07-26 10:55:10.679909] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1274215 has claimed it 00:06:51.225 EAL: Cannot find device (10000:00:01.0) 00:06:51.225 EAL: Failed to attach device on primary process 00:06:51.225 passed 00:06:51.225 00:06:51.225 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.225 suites 1 1 n/a 0 0 00:06:51.225 tests 1 1 1 0 0 00:06:51.225 asserts 25 25 25 0 n/a 00:06:51.225 00:06:51.225 Elapsed time = 0.026 seconds 00:06:51.225 00:06:51.225 real 0m0.045s 00:06:51.225 user 0m0.014s 00:06:51.225 sys 0m0.031s 00:06:51.225 10:55:10 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.225 10:55:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:51.225 ************************************ 00:06:51.225 END TEST env_pci 00:06:51.225 ************************************ 00:06:51.485 10:55:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:51.485 10:55:10 env -- env/env.sh@15 -- # uname 00:06:51.485 10:55:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:51.485 10:55:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:51.485 10:55:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:51.485 10:55:10 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:51.485 10:55:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.485 10:55:10 env -- common/autotest_common.sh@10 -- # set +x 00:06:51.485 ************************************ 00:06:51.485 START TEST env_dpdk_post_init 00:06:51.485 ************************************ 00:06:51.485 10:55:10 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:51.485 EAL: Detected CPU lcores: 96 00:06:51.485 EAL: Detected NUMA nodes: 2 00:06:51.485 EAL: Detected shared linkage of DPDK 00:06:51.485 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:51.485 EAL: Selected IOVA mode 'VA' 00:06:51.485 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.485 EAL: VFIO support initialized 00:06:51.485 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:51.485 EAL: Using IOMMU type 1 (Type 1) 00:06:51.485 EAL: Ignore mapping IO port bar(1) 00:06:51.485 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:51.485 EAL: Ignore mapping IO port bar(1) 00:06:51.485 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:51.485 EAL: Ignore mapping IO port bar(1) 00:06:51.485 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:51.485 EAL: Ignore mapping IO port bar(1) 00:06:51.485 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:51.485 EAL: Ignore mapping IO port bar(1) 00:06:51.485 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:51.485 EAL: Ignore mapping IO port bar(1) 00:06:51.485 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:06:51.485 EAL: Ignore mapping IO 
port bar(1) 00:06:51.485 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:06:51.745 EAL: Ignore mapping IO port bar(1) 00:06:51.745 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:52.316 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:06:52.316 EAL: Ignore mapping IO port bar(1) 00:06:52.316 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:52.316 EAL: Ignore mapping IO port bar(1) 00:06:52.316 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:52.316 EAL: Ignore mapping IO port bar(1) 00:06:52.316 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:06:52.316 EAL: Ignore mapping IO port bar(1) 00:06:52.316 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:06:52.316 EAL: Ignore mapping IO port bar(1) 00:06:52.316 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:06:52.316 EAL: Ignore mapping IO port bar(1) 00:06:52.316 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:06:52.316 EAL: Ignore mapping IO port bar(1) 00:06:52.316 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:06:52.576 EAL: Ignore mapping IO port bar(1) 00:06:52.576 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:06:55.872 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:06:55.872 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:06:55.872 Starting DPDK initialization... 00:06:55.872 Starting SPDK post initialization... 00:06:55.872 SPDK NVMe probe 00:06:55.872 Attaching to 0000:5e:00.0 00:06:55.872 Attached to 0000:5e:00.0 00:06:55.872 Cleaning up... 
00:06:55.872 00:06:55.872 real 0m4.335s 00:06:55.872 user 0m3.299s 00:06:55.872 sys 0m0.113s 00:06:55.872 10:55:15 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.872 10:55:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:55.872 ************************************ 00:06:55.872 END TEST env_dpdk_post_init 00:06:55.872 ************************************ 00:06:55.872 10:55:15 env -- env/env.sh@26 -- # uname 00:06:55.872 10:55:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:55.872 10:55:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:55.872 10:55:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.872 10:55:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.872 10:55:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:55.872 ************************************ 00:06:55.872 START TEST env_mem_callbacks 00:06:55.872 ************************************ 00:06:55.872 10:55:15 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:55.872 EAL: Detected CPU lcores: 96 00:06:55.872 EAL: Detected NUMA nodes: 2 00:06:55.872 EAL: Detected shared linkage of DPDK 00:06:55.872 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:55.872 EAL: Selected IOVA mode 'VA' 00:06:55.872 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.872 EAL: VFIO support initialized 00:06:55.872 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:55.872 00:06:55.872 00:06:55.872 CUnit - A unit testing framework for C - Version 2.1-3 00:06:55.872 http://cunit.sourceforge.net/ 00:06:55.872 00:06:55.872 00:06:55.872 Suite: memory 00:06:55.872 Test: test ... 
00:06:55.872 register 0x200000200000 2097152 00:06:55.872 malloc 3145728 00:06:55.872 register 0x200000400000 4194304 00:06:55.872 buf 0x200000500000 len 3145728 PASSED 00:06:55.872 malloc 64 00:06:55.872 buf 0x2000004fff40 len 64 PASSED 00:06:55.872 malloc 4194304 00:06:55.872 register 0x200000800000 6291456 00:06:55.872 buf 0x200000a00000 len 4194304 PASSED 00:06:55.872 free 0x200000500000 3145728 00:06:55.872 free 0x2000004fff40 64 00:06:55.872 unregister 0x200000400000 4194304 PASSED 00:06:55.872 free 0x200000a00000 4194304 00:06:55.872 unregister 0x200000800000 6291456 PASSED 00:06:55.872 malloc 8388608 00:06:55.872 register 0x200000400000 10485760 00:06:55.872 buf 0x200000600000 len 8388608 PASSED 00:06:55.872 free 0x200000600000 8388608 00:06:55.872 unregister 0x200000400000 10485760 PASSED 00:06:55.872 passed 00:06:55.872 00:06:55.872 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.872 suites 1 1 n/a 0 0 00:06:55.872 tests 1 1 1 0 0 00:06:55.872 asserts 15 15 15 0 n/a 00:06:55.872 00:06:55.872 Elapsed time = 0.005 seconds 00:06:55.872 00:06:55.872 real 0m0.052s 00:06:55.872 user 0m0.017s 00:06:55.872 sys 0m0.035s 00:06:55.872 10:55:15 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.872 10:55:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:55.872 ************************************ 00:06:55.872 END TEST env_mem_callbacks 00:06:55.872 ************************************ 00:06:55.872 00:06:55.872 real 0m6.084s 00:06:55.872 user 0m4.285s 00:06:55.872 sys 0m0.879s 00:06:55.872 10:55:15 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.872 10:55:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:55.872 ************************************ 00:06:55.872 END TEST env 00:06:55.872 ************************************ 00:06:55.872 10:55:15 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:55.872 10:55:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.872 10:55:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.872 10:55:15 -- common/autotest_common.sh@10 -- # set +x 00:06:55.872 ************************************ 00:06:55.872 START TEST rpc 00:06:55.872 ************************************ 00:06:55.872 10:55:15 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:56.133 * Looking for test storage... 00:06:56.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:56.133 10:55:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1275033 00:06:56.133 10:55:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:56.133 10:55:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1275033 00:06:56.133 10:55:15 rpc -- common/autotest_common.sh@831 -- # '[' -z 1275033 ']' 00:06:56.133 10:55:15 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:56.133 10:55:15 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.133 10:55:15 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.133 10:55:15 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
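The bdev_nvme_opal_revert request/response pair printed earlier, and the /var/tmp/spdk.sock listener being waited on here, are plain JSON-RPC 2.0 over a Unix domain socket. Below is a stdlib-only sketch of that exchange; the method name and parameters are copied from the log, while the socket path assumes spdk_tgt's default and the framing logic is illustrative rather than the test's own code.

#!/usr/bin/env python3
# Illustrative sketch only: raw JSON-RPC 2.0 call against spdk_tgt's default
# Unix socket. Method and params mirror the bdev_nvme_opal_revert request
# shown earlier in this log; on a drive without Opal support the server
# returns the same -32602 "Invalid parameters" error seen above.
import json
import socket

def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    decoder = json.JSONDecoder()
    buf = ""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full response")
            buf += chunk.decode()
            try:
                # Keep reading until one complete JSON object has arrived.
                response, _ = decoder.raw_decode(buf)
                return response
            except json.JSONDecodeError:
                continue

if __name__ == "__main__":
    print(spdk_rpc("bdev_nvme_opal_revert",
                   {"nvme_ctrlr_name": "nvme0", "password": "test"}))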
00:06:56.133 10:55:15 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.133 10:55:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.133 [2024-07-26 10:55:15.473060] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:56.133 [2024-07-26 10:55:15.473106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275033 ] 00:06:56.133 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.133 [2024-07-26 10:55:15.525571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.133 [2024-07-26 10:55:15.604878] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:56.133 [2024-07-26 10:55:15.604911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1275033' to capture a snapshot of events at runtime. 00:06:56.133 [2024-07-26 10:55:15.604918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.133 [2024-07-26 10:55:15.604924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.133 [2024-07-26 10:55:15.604929] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1275033 for offline analysis/debug. 00:06:56.133 [2024-07-26 10:55:15.604946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.072 10:55:16 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.072 10:55:16 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:57.073 10:55:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:57.073 10:55:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:57.073 10:55:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:57.073 10:55:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:57.073 10:55:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.073 10:55:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.073 10:55:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.073 ************************************ 00:06:57.073 START TEST rpc_integrity 00:06:57.073 ************************************ 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:57.073 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.073 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:57.073 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:57.073 10:55:16 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:57.073 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.073 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:57.073 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.073 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:57.073 { 00:06:57.073 "name": "Malloc0", 00:06:57.073 "aliases": [ 00:06:57.073 "48e3e505-89cc-44a9-94f4-352d77cf0a79" 00:06:57.073 ], 00:06:57.073 "product_name": "Malloc disk", 00:06:57.073 "block_size": 512, 00:06:57.073 "num_blocks": 16384, 00:06:57.073 "uuid": "48e3e505-89cc-44a9-94f4-352d77cf0a79", 00:06:57.073 "assigned_rate_limits": { 00:06:57.073 "rw_ios_per_sec": 0, 00:06:57.073 "rw_mbytes_per_sec": 0, 00:06:57.073 "r_mbytes_per_sec": 0, 00:06:57.073 "w_mbytes_per_sec": 0 00:06:57.073 }, 00:06:57.073 "claimed": false, 00:06:57.073 "zoned": false, 00:06:57.073 "supported_io_types": { 00:06:57.073 "read": true, 00:06:57.073 "write": true, 00:06:57.073 "unmap": true, 00:06:57.073 "flush": true, 00:06:57.073 "reset": true, 00:06:57.073 "nvme_admin": false, 00:06:57.073 "nvme_io": false, 00:06:57.073 "nvme_io_md": false, 00:06:57.073 "write_zeroes": true, 00:06:57.073 "zcopy": true, 00:06:57.073 "get_zone_info": false, 00:06:57.073 "zone_management": false, 00:06:57.073 "zone_append": false, 00:06:57.073 "compare": false, 00:06:57.073 "compare_and_write": false, 00:06:57.073 "abort": true, 00:06:57.073 "seek_hole": false, 00:06:57.073 "seek_data": false, 00:06:57.073 "copy": true, 00:06:57.073 "nvme_iov_md": false 00:06:57.073 }, 00:06:57.073 "memory_domains": [ 00:06:57.073 { 00:06:57.073 "dma_device_id": "system", 00:06:57.073 "dma_device_type": 1 00:06:57.073 }, 00:06:57.073 { 00:06:57.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.073 "dma_device_type": 2 00:06:57.073 } 00:06:57.073 ], 00:06:57.073 "driver_specific": {} 00:06:57.073 } 00:06:57.073 ]' 00:06:57.073 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:57.073 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:57.073 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.073 [2024-07-26 10:55:16.410078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:57.073 [2024-07-26 10:55:16.410106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:57.073 [2024-07-26 10:55:16.410120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eba2d0 00:06:57.073 [2024-07-26 10:55:16.410126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:57.073 [2024-07-26 10:55:16.411196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
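The rpc_integrity steps above drive the target purely over JSON-RPC. The same sequence can be replayed by hand against the spdk_tgt launched at the top of this test (listening on the default /var/tmp/spdk.sock), for example with SPDK's scripts/rpc.py helper — a sketch under that assumption; this log only shows the rpc_cmd wrapper:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # create an 8 MiB malloc bdev with 512-byte blocks, then claim it with a passthru bdev
  $SPDK/scripts/rpc.py bdev_malloc_create 8 512              # prints the new name, e.g. Malloc0
  $SPDK/scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  # both the base bdev and the passthru bdev should now be listed
  $SPDK/scripts/rpc.py bdev_get_bdevs | jq length            # expect 2
  # tear down in reverse order and confirm the list is empty again
  $SPDK/scripts/rpc.py bdev_passthru_delete Passthru0
  $SPDK/scripts/rpc.py bdev_malloc_delete Malloc0
  $SPDK/scripts/rpc.py bdev_get_bdevs | jq length            # expect 0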
00:06:57.073 [2024-07-26 10:55:16.411216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:57.073 Passthru0 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.073 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.073 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.073 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:57.073 { 00:06:57.073 "name": "Malloc0", 00:06:57.073 "aliases": [ 00:06:57.073 "48e3e505-89cc-44a9-94f4-352d77cf0a79" 00:06:57.073 ], 00:06:57.073 "product_name": "Malloc disk", 00:06:57.073 "block_size": 512, 00:06:57.073 "num_blocks": 16384, 00:06:57.073 "uuid": "48e3e505-89cc-44a9-94f4-352d77cf0a79", 00:06:57.073 "assigned_rate_limits": { 00:06:57.073 "rw_ios_per_sec": 0, 00:06:57.073 "rw_mbytes_per_sec": 0, 00:06:57.073 "r_mbytes_per_sec": 0, 00:06:57.073 "w_mbytes_per_sec": 0 00:06:57.073 }, 00:06:57.073 "claimed": true, 00:06:57.073 "claim_type": "exclusive_write", 00:06:57.073 "zoned": false, 00:06:57.073 "supported_io_types": { 00:06:57.073 "read": true, 00:06:57.073 "write": true, 00:06:57.073 "unmap": true, 00:06:57.073 "flush": true, 00:06:57.073 "reset": true, 00:06:57.073 "nvme_admin": false, 00:06:57.073 "nvme_io": false, 00:06:57.073 "nvme_io_md": false, 00:06:57.073 "write_zeroes": true, 00:06:57.073 "zcopy": true, 00:06:57.073 "get_zone_info": false, 00:06:57.073 "zone_management": false, 00:06:57.073 "zone_append": false, 00:06:57.073 "compare": false, 00:06:57.073 "compare_and_write": false, 00:06:57.073 "abort": true, 00:06:57.073 "seek_hole": false, 00:06:57.073 "seek_data": false, 00:06:57.073 "copy": true, 00:06:57.073 "nvme_iov_md": false 00:06:57.073 }, 00:06:57.073 "memory_domains": [ 00:06:57.073 { 00:06:57.073 "dma_device_id": "system", 00:06:57.073 "dma_device_type": 1 00:06:57.073 }, 00:06:57.073 { 00:06:57.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.073 "dma_device_type": 2 00:06:57.073 } 00:06:57.073 ], 00:06:57.073 "driver_specific": {} 00:06:57.073 }, 00:06:57.073 { 00:06:57.073 "name": "Passthru0", 00:06:57.073 "aliases": [ 00:06:57.073 "03a9a714-a37d-560d-8129-26ca72175c86" 00:06:57.074 ], 00:06:57.074 "product_name": "passthru", 00:06:57.074 "block_size": 512, 00:06:57.074 "num_blocks": 16384, 00:06:57.074 "uuid": "03a9a714-a37d-560d-8129-26ca72175c86", 00:06:57.074 "assigned_rate_limits": { 00:06:57.074 "rw_ios_per_sec": 0, 00:06:57.074 "rw_mbytes_per_sec": 0, 00:06:57.074 "r_mbytes_per_sec": 0, 00:06:57.074 "w_mbytes_per_sec": 0 00:06:57.074 }, 00:06:57.074 "claimed": false, 00:06:57.074 "zoned": false, 00:06:57.074 "supported_io_types": { 00:06:57.074 "read": true, 00:06:57.074 "write": true, 00:06:57.074 "unmap": true, 00:06:57.074 "flush": true, 00:06:57.074 "reset": true, 00:06:57.074 "nvme_admin": false, 00:06:57.074 "nvme_io": false, 00:06:57.074 "nvme_io_md": false, 00:06:57.074 "write_zeroes": true, 00:06:57.074 "zcopy": true, 00:06:57.074 "get_zone_info": false, 00:06:57.074 "zone_management": false, 00:06:57.074 "zone_append": false, 00:06:57.074 "compare": false, 00:06:57.074 "compare_and_write": false, 00:06:57.074 "abort": true, 00:06:57.074 "seek_hole": false, 00:06:57.074 "seek_data": false, 00:06:57.074 "copy": true, 00:06:57.074 "nvme_iov_md": false 00:06:57.074 
}, 00:06:57.074 "memory_domains": [ 00:06:57.074 { 00:06:57.074 "dma_device_id": "system", 00:06:57.074 "dma_device_type": 1 00:06:57.074 }, 00:06:57.074 { 00:06:57.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.074 "dma_device_type": 2 00:06:57.074 } 00:06:57.074 ], 00:06:57.074 "driver_specific": { 00:06:57.074 "passthru": { 00:06:57.074 "name": "Passthru0", 00:06:57.074 "base_bdev_name": "Malloc0" 00:06:57.074 } 00:06:57.074 } 00:06:57.074 } 00:06:57.074 ]' 00:06:57.074 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:57.074 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:57.074 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:57.074 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.074 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.074 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.074 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:57.074 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.074 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.074 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.074 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:57.074 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.074 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.074 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.074 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:57.074 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:57.074 10:55:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:57.074 00:06:57.074 real 0m0.260s 00:06:57.074 user 0m0.155s 00:06:57.074 sys 0m0.034s 00:06:57.074 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.074 10:55:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.074 ************************************ 00:06:57.074 END TEST rpc_integrity 00:06:57.074 ************************************ 00:06:57.074 10:55:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:57.074 10:55:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.074 10:55:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.074 10:55:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.334 ************************************ 00:06:57.334 START TEST rpc_plugins 00:06:57.334 ************************************ 00:06:57.334 10:55:16 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:57.334 10:55:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:57.334 10:55:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.334 10:55:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:57.334 10:55:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.334 10:55:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:57.334 10:55:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:57.334 10:55:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.334 10:55:16 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:06:57.334 10:55:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.334 10:55:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:57.334 { 00:06:57.334 "name": "Malloc1", 00:06:57.334 "aliases": [ 00:06:57.334 "715f5127-9545-4a05-b898-b30994675409" 00:06:57.334 ], 00:06:57.334 "product_name": "Malloc disk", 00:06:57.334 "block_size": 4096, 00:06:57.334 "num_blocks": 256, 00:06:57.334 "uuid": "715f5127-9545-4a05-b898-b30994675409", 00:06:57.334 "assigned_rate_limits": { 00:06:57.334 "rw_ios_per_sec": 0, 00:06:57.334 "rw_mbytes_per_sec": 0, 00:06:57.334 "r_mbytes_per_sec": 0, 00:06:57.334 "w_mbytes_per_sec": 0 00:06:57.334 }, 00:06:57.334 "claimed": false, 00:06:57.334 "zoned": false, 00:06:57.334 "supported_io_types": { 00:06:57.334 "read": true, 00:06:57.334 "write": true, 00:06:57.334 "unmap": true, 00:06:57.334 "flush": true, 00:06:57.334 "reset": true, 00:06:57.334 "nvme_admin": false, 00:06:57.334 "nvme_io": false, 00:06:57.334 "nvme_io_md": false, 00:06:57.334 "write_zeroes": true, 00:06:57.334 "zcopy": true, 00:06:57.334 "get_zone_info": false, 00:06:57.334 "zone_management": false, 00:06:57.334 "zone_append": false, 00:06:57.334 "compare": false, 00:06:57.334 "compare_and_write": false, 00:06:57.334 "abort": true, 00:06:57.334 "seek_hole": false, 00:06:57.334 "seek_data": false, 00:06:57.334 "copy": true, 00:06:57.334 "nvme_iov_md": false 00:06:57.334 }, 00:06:57.334 "memory_domains": [ 00:06:57.334 { 00:06:57.334 "dma_device_id": "system", 00:06:57.334 "dma_device_type": 1 00:06:57.334 }, 00:06:57.334 { 00:06:57.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.334 "dma_device_type": 2 00:06:57.334 } 00:06:57.334 ], 00:06:57.334 "driver_specific": {} 00:06:57.334 } 00:06:57.334 ]' 00:06:57.334 10:55:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:57.334 10:55:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:57.334 10:55:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:57.334 10:55:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.334 10:55:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:57.334 10:55:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.334 10:55:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:57.334 10:55:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.334 10:55:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:57.334 10:55:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.334 10:55:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:57.334 10:55:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:57.334 10:55:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:57.334 00:06:57.334 real 0m0.137s 00:06:57.334 user 0m0.090s 00:06:57.334 sys 0m0.015s 00:06:57.334 10:55:16 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.334 10:55:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:57.334 ************************************ 00:06:57.334 END TEST rpc_plugins 00:06:57.334 ************************************ 00:06:57.334 10:55:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:57.334 10:55:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.334 10:55:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.334 10:55:16 
rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.334 ************************************ 00:06:57.334 START TEST rpc_trace_cmd_test 00:06:57.334 ************************************ 00:06:57.334 10:55:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:57.334 10:55:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:57.335 10:55:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:57.335 10:55:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.335 10:55:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.335 10:55:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.335 10:55:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:57.335 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1275033", 00:06:57.335 "tpoint_group_mask": "0x8", 00:06:57.335 "iscsi_conn": { 00:06:57.335 "mask": "0x2", 00:06:57.335 "tpoint_mask": "0x0" 00:06:57.335 }, 00:06:57.335 "scsi": { 00:06:57.335 "mask": "0x4", 00:06:57.335 "tpoint_mask": "0x0" 00:06:57.335 }, 00:06:57.335 "bdev": { 00:06:57.335 "mask": "0x8", 00:06:57.335 "tpoint_mask": "0xffffffffffffffff" 00:06:57.335 }, 00:06:57.335 "nvmf_rdma": { 00:06:57.335 "mask": "0x10", 00:06:57.335 "tpoint_mask": "0x0" 00:06:57.335 }, 00:06:57.335 "nvmf_tcp": { 00:06:57.335 "mask": "0x20", 00:06:57.335 "tpoint_mask": "0x0" 00:06:57.335 }, 00:06:57.335 "ftl": { 00:06:57.335 "mask": "0x40", 00:06:57.335 "tpoint_mask": "0x0" 00:06:57.335 }, 00:06:57.335 "blobfs": { 00:06:57.335 "mask": "0x80", 00:06:57.335 "tpoint_mask": "0x0" 00:06:57.335 }, 00:06:57.335 "dsa": { 00:06:57.335 "mask": "0x200", 00:06:57.335 "tpoint_mask": "0x0" 00:06:57.335 }, 00:06:57.335 "thread": { 00:06:57.335 "mask": "0x400", 00:06:57.335 "tpoint_mask": "0x0" 00:06:57.335 }, 00:06:57.335 "nvme_pcie": { 00:06:57.335 "mask": "0x800", 00:06:57.335 "tpoint_mask": "0x0" 00:06:57.335 }, 00:06:57.335 "iaa": { 00:06:57.335 "mask": "0x1000", 00:06:57.335 "tpoint_mask": "0x0" 00:06:57.335 }, 00:06:57.335 "nvme_tcp": { 00:06:57.335 "mask": "0x2000", 00:06:57.335 "tpoint_mask": "0x0" 00:06:57.335 }, 00:06:57.335 "bdev_nvme": { 00:06:57.335 "mask": "0x4000", 00:06:57.335 "tpoint_mask": "0x0" 00:06:57.335 }, 00:06:57.335 "sock": { 00:06:57.335 "mask": "0x8000", 00:06:57.335 "tpoint_mask": "0x0" 00:06:57.335 } 00:06:57.335 }' 00:06:57.335 10:55:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:57.595 10:55:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:57.595 10:55:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:57.595 10:55:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:57.595 10:55:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:57.595 10:55:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:57.595 10:55:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:57.595 10:55:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:57.595 10:55:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:57.596 10:55:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:57.596 00:06:57.596 real 0m0.215s 00:06:57.596 user 0m0.190s 00:06:57.596 sys 0m0.018s 00:06:57.596 10:55:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.596 10:55:17 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.596 ************************************ 00:06:57.596 END TEST rpc_trace_cmd_test 00:06:57.596 ************************************ 00:06:57.596 10:55:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:57.596 10:55:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:57.596 10:55:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:57.596 10:55:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.596 10:55:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.596 10:55:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.596 ************************************ 00:06:57.596 START TEST rpc_daemon_integrity 00:06:57.596 ************************************ 00:06:57.596 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:57.596 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:57.596 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.596 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.596 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.596 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:57.596 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:57.856 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:57.856 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:57.856 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.856 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.856 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.856 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:57.856 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:57.856 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.856 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.856 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.856 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:57.856 { 00:06:57.856 "name": "Malloc2", 00:06:57.856 "aliases": [ 00:06:57.856 "b30d7281-6f35-4557-8e45-c3ba1ee6b7bf" 00:06:57.856 ], 00:06:57.856 "product_name": "Malloc disk", 00:06:57.856 "block_size": 512, 00:06:57.856 "num_blocks": 16384, 00:06:57.856 "uuid": "b30d7281-6f35-4557-8e45-c3ba1ee6b7bf", 00:06:57.856 "assigned_rate_limits": { 00:06:57.856 "rw_ios_per_sec": 0, 00:06:57.856 "rw_mbytes_per_sec": 0, 00:06:57.856 "r_mbytes_per_sec": 0, 00:06:57.856 "w_mbytes_per_sec": 0 00:06:57.856 }, 00:06:57.856 "claimed": false, 00:06:57.856 "zoned": false, 00:06:57.856 "supported_io_types": { 00:06:57.856 "read": true, 00:06:57.856 "write": true, 00:06:57.856 "unmap": true, 00:06:57.856 "flush": true, 00:06:57.857 "reset": true, 00:06:57.857 "nvme_admin": false, 00:06:57.857 "nvme_io": false, 00:06:57.857 "nvme_io_md": false, 00:06:57.857 "write_zeroes": true, 00:06:57.857 "zcopy": true, 00:06:57.857 "get_zone_info": false, 00:06:57.857 "zone_management": false, 00:06:57.857 "zone_append": false, 00:06:57.857 "compare": false, 00:06:57.857 "compare_and_write": false, 
00:06:57.857 "abort": true, 00:06:57.857 "seek_hole": false, 00:06:57.857 "seek_data": false, 00:06:57.857 "copy": true, 00:06:57.857 "nvme_iov_md": false 00:06:57.857 }, 00:06:57.857 "memory_domains": [ 00:06:57.857 { 00:06:57.857 "dma_device_id": "system", 00:06:57.857 "dma_device_type": 1 00:06:57.857 }, 00:06:57.857 { 00:06:57.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.857 "dma_device_type": 2 00:06:57.857 } 00:06:57.857 ], 00:06:57.857 "driver_specific": {} 00:06:57.857 } 00:06:57.857 ]' 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.857 [2024-07-26 10:55:17.204231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:57.857 [2024-07-26 10:55:17.204259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:57.857 [2024-07-26 10:55:17.204271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2051ac0 00:06:57.857 [2024-07-26 10:55:17.204278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:57.857 [2024-07-26 10:55:17.205230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:57.857 [2024-07-26 10:55:17.205251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:57.857 Passthru0 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:57.857 { 00:06:57.857 "name": "Malloc2", 00:06:57.857 "aliases": [ 00:06:57.857 "b30d7281-6f35-4557-8e45-c3ba1ee6b7bf" 00:06:57.857 ], 00:06:57.857 "product_name": "Malloc disk", 00:06:57.857 "block_size": 512, 00:06:57.857 "num_blocks": 16384, 00:06:57.857 "uuid": "b30d7281-6f35-4557-8e45-c3ba1ee6b7bf", 00:06:57.857 "assigned_rate_limits": { 00:06:57.857 "rw_ios_per_sec": 0, 00:06:57.857 "rw_mbytes_per_sec": 0, 00:06:57.857 "r_mbytes_per_sec": 0, 00:06:57.857 "w_mbytes_per_sec": 0 00:06:57.857 }, 00:06:57.857 "claimed": true, 00:06:57.857 "claim_type": "exclusive_write", 00:06:57.857 "zoned": false, 00:06:57.857 "supported_io_types": { 00:06:57.857 "read": true, 00:06:57.857 "write": true, 00:06:57.857 "unmap": true, 00:06:57.857 "flush": true, 00:06:57.857 "reset": true, 00:06:57.857 "nvme_admin": false, 00:06:57.857 "nvme_io": false, 00:06:57.857 "nvme_io_md": false, 00:06:57.857 "write_zeroes": true, 00:06:57.857 "zcopy": true, 00:06:57.857 "get_zone_info": false, 00:06:57.857 "zone_management": false, 00:06:57.857 "zone_append": false, 00:06:57.857 "compare": false, 00:06:57.857 "compare_and_write": false, 00:06:57.857 "abort": true, 00:06:57.857 "seek_hole": false, 00:06:57.857 "seek_data": false, 00:06:57.857 "copy": true, 
00:06:57.857 "nvme_iov_md": false 00:06:57.857 }, 00:06:57.857 "memory_domains": [ 00:06:57.857 { 00:06:57.857 "dma_device_id": "system", 00:06:57.857 "dma_device_type": 1 00:06:57.857 }, 00:06:57.857 { 00:06:57.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.857 "dma_device_type": 2 00:06:57.857 } 00:06:57.857 ], 00:06:57.857 "driver_specific": {} 00:06:57.857 }, 00:06:57.857 { 00:06:57.857 "name": "Passthru0", 00:06:57.857 "aliases": [ 00:06:57.857 "4f320d90-8434-57fd-88dc-9e4f486b0cf9" 00:06:57.857 ], 00:06:57.857 "product_name": "passthru", 00:06:57.857 "block_size": 512, 00:06:57.857 "num_blocks": 16384, 00:06:57.857 "uuid": "4f320d90-8434-57fd-88dc-9e4f486b0cf9", 00:06:57.857 "assigned_rate_limits": { 00:06:57.857 "rw_ios_per_sec": 0, 00:06:57.857 "rw_mbytes_per_sec": 0, 00:06:57.857 "r_mbytes_per_sec": 0, 00:06:57.857 "w_mbytes_per_sec": 0 00:06:57.857 }, 00:06:57.857 "claimed": false, 00:06:57.857 "zoned": false, 00:06:57.857 "supported_io_types": { 00:06:57.857 "read": true, 00:06:57.857 "write": true, 00:06:57.857 "unmap": true, 00:06:57.857 "flush": true, 00:06:57.857 "reset": true, 00:06:57.857 "nvme_admin": false, 00:06:57.857 "nvme_io": false, 00:06:57.857 "nvme_io_md": false, 00:06:57.857 "write_zeroes": true, 00:06:57.857 "zcopy": true, 00:06:57.857 "get_zone_info": false, 00:06:57.857 "zone_management": false, 00:06:57.857 "zone_append": false, 00:06:57.857 "compare": false, 00:06:57.857 "compare_and_write": false, 00:06:57.857 "abort": true, 00:06:57.857 "seek_hole": false, 00:06:57.857 "seek_data": false, 00:06:57.857 "copy": true, 00:06:57.857 "nvme_iov_md": false 00:06:57.857 }, 00:06:57.857 "memory_domains": [ 00:06:57.857 { 00:06:57.857 "dma_device_id": "system", 00:06:57.857 "dma_device_type": 1 00:06:57.857 }, 00:06:57.857 { 00:06:57.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.857 "dma_device_type": 2 00:06:57.857 } 00:06:57.857 ], 00:06:57.857 "driver_specific": { 00:06:57.857 "passthru": { 00:06:57.857 "name": "Passthru0", 00:06:57.857 "base_bdev_name": "Malloc2" 00:06:57.857 } 00:06:57.857 } 00:06:57.857 } 00:06:57.857 ]' 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:57.857 10:55:17 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:57.857 00:06:57.857 real 0m0.258s 00:06:57.857 user 0m0.159s 00:06:57.857 sys 0m0.035s 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.857 10:55:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:57.857 ************************************ 00:06:57.857 END TEST rpc_daemon_integrity 00:06:57.857 ************************************ 00:06:58.117 10:55:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:58.117 10:55:17 rpc -- rpc/rpc.sh@84 -- # killprocess 1275033 00:06:58.117 10:55:17 rpc -- common/autotest_common.sh@950 -- # '[' -z 1275033 ']' 00:06:58.117 10:55:17 rpc -- common/autotest_common.sh@954 -- # kill -0 1275033 00:06:58.117 10:55:17 rpc -- common/autotest_common.sh@955 -- # uname 00:06:58.117 10:55:17 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.117 10:55:17 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1275033 00:06:58.117 10:55:17 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.117 10:55:17 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.117 10:55:17 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1275033' 00:06:58.117 killing process with pid 1275033 00:06:58.117 10:55:17 rpc -- common/autotest_common.sh@969 -- # kill 1275033 00:06:58.117 10:55:17 rpc -- common/autotest_common.sh@974 -- # wait 1275033 00:06:58.377 00:06:58.377 real 0m2.373s 00:06:58.377 user 0m3.047s 00:06:58.377 sys 0m0.623s 00:06:58.377 10:55:17 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.377 10:55:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.377 ************************************ 00:06:58.377 END TEST rpc 00:06:58.377 ************************************ 00:06:58.377 10:55:17 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:58.377 10:55:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.377 10:55:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.377 10:55:17 -- common/autotest_common.sh@10 -- # set +x 00:06:58.377 ************************************ 00:06:58.377 START TEST skip_rpc 00:06:58.377 ************************************ 00:06:58.377 10:55:17 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:58.377 * Looking for test storage... 
00:06:58.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:58.377 10:55:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:58.377 10:55:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:58.377 10:55:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:58.377 10:55:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.377 10:55:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.377 10:55:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.637 ************************************ 00:06:58.637 START TEST skip_rpc 00:06:58.637 ************************************ 00:06:58.637 10:55:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:58.637 10:55:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1275664 00:06:58.637 10:55:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:58.637 10:55:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:58.637 10:55:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:58.637 [2024-07-26 10:55:17.939745] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:58.637 [2024-07-26 10:55:17.939786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275664 ] 00:06:58.637 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.637 [2024-07-26 10:55:17.991615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.637 [2024-07-26 10:55:18.062806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.012 10:55:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:04.012 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:04.012 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:04.012 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:04.012 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1275664 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1275664 ']' 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1275664 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1275664 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1275664' 00:07:04.013 killing process with pid 1275664 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1275664 00:07:04.013 10:55:22 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1275664 00:07:04.013 00:07:04.013 real 0m5.366s 00:07:04.013 user 0m5.142s 00:07:04.013 sys 0m0.253s 00:07:04.013 10:55:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.013 10:55:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.013 ************************************ 00:07:04.013 END TEST skip_rpc 00:07:04.013 ************************************ 00:07:04.013 10:55:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:04.013 10:55:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.013 10:55:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.013 10:55:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.013 ************************************ 00:07:04.013 START TEST skip_rpc_with_json 00:07:04.013 ************************************ 00:07:04.013 10:55:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:07:04.013 10:55:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:04.013 10:55:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1276615 00:07:04.013 10:55:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.013 10:55:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.013 10:55:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1276615 00:07:04.013 10:55:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1276615 ']' 00:07:04.013 10:55:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.013 10:55:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.013 10:55:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
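The skip_rpc case above boils down to: start the target with --no-rpc-server and prove that any RPC fails. A hand-run equivalent (a sketch; rpc.py and the settle time mirror the script's sleep 5 and are assumptions, not taken verbatim from this log):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  # no RPC server is listening on /var/tmp/spdk.sock, so this must exit non-zero
  if $SPDK/scripts/rpc.py spdk_get_version; then
      echo "unexpected: spdk_get_version succeeded" >&2
  fi
  kill %1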
00:07:04.013 10:55:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.013 10:55:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.013 [2024-07-26 10:55:23.368821] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:04.013 [2024-07-26 10:55:23.368864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276615 ] 00:07:04.013 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.013 [2024-07-26 10:55:23.420001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.013 [2024-07-26 10:55:23.499784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.953 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.953 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:07:04.953 10:55:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:04.953 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.953 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.953 [2024-07-26 10:55:24.166329] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:04.953 request: 00:07:04.953 { 00:07:04.953 "trtype": "tcp", 00:07:04.953 "method": "nvmf_get_transports", 00:07:04.953 "req_id": 1 00:07:04.953 } 00:07:04.953 Got JSON-RPC error response 00:07:04.953 response: 00:07:04.953 { 00:07:04.953 "code": -19, 00:07:04.953 "message": "No such device" 00:07:04.953 } 00:07:04.953 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:04.953 10:55:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:04.953 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.953 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.953 [2024-07-26 10:55:24.174426] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.953 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.954 10:55:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:04.954 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.954 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.954 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.954 10:55:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:04.954 { 00:07:04.954 "subsystems": [ 00:07:04.954 { 00:07:04.954 "subsystem": "vfio_user_target", 00:07:04.954 "config": null 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "subsystem": "keyring", 00:07:04.954 "config": [] 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "subsystem": "iobuf", 00:07:04.954 "config": [ 00:07:04.954 { 00:07:04.954 "method": "iobuf_set_options", 00:07:04.954 "params": { 00:07:04.954 "small_pool_count": 8192, 00:07:04.954 "large_pool_count": 1024, 00:07:04.954 "small_bufsize": 8192, 00:07:04.954 "large_bufsize": 
135168 00:07:04.954 } 00:07:04.954 } 00:07:04.954 ] 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "subsystem": "sock", 00:07:04.954 "config": [ 00:07:04.954 { 00:07:04.954 "method": "sock_set_default_impl", 00:07:04.954 "params": { 00:07:04.954 "impl_name": "posix" 00:07:04.954 } 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "method": "sock_impl_set_options", 00:07:04.954 "params": { 00:07:04.954 "impl_name": "ssl", 00:07:04.954 "recv_buf_size": 4096, 00:07:04.954 "send_buf_size": 4096, 00:07:04.954 "enable_recv_pipe": true, 00:07:04.954 "enable_quickack": false, 00:07:04.954 "enable_placement_id": 0, 00:07:04.954 "enable_zerocopy_send_server": true, 00:07:04.954 "enable_zerocopy_send_client": false, 00:07:04.954 "zerocopy_threshold": 0, 00:07:04.954 "tls_version": 0, 00:07:04.954 "enable_ktls": false 00:07:04.954 } 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "method": "sock_impl_set_options", 00:07:04.954 "params": { 00:07:04.954 "impl_name": "posix", 00:07:04.954 "recv_buf_size": 2097152, 00:07:04.954 "send_buf_size": 2097152, 00:07:04.954 "enable_recv_pipe": true, 00:07:04.954 "enable_quickack": false, 00:07:04.954 "enable_placement_id": 0, 00:07:04.954 "enable_zerocopy_send_server": true, 00:07:04.954 "enable_zerocopy_send_client": false, 00:07:04.954 "zerocopy_threshold": 0, 00:07:04.954 "tls_version": 0, 00:07:04.954 "enable_ktls": false 00:07:04.954 } 00:07:04.954 } 00:07:04.954 ] 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "subsystem": "vmd", 00:07:04.954 "config": [] 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "subsystem": "accel", 00:07:04.954 "config": [ 00:07:04.954 { 00:07:04.954 "method": "accel_set_options", 00:07:04.954 "params": { 00:07:04.954 "small_cache_size": 128, 00:07:04.954 "large_cache_size": 16, 00:07:04.954 "task_count": 2048, 00:07:04.954 "sequence_count": 2048, 00:07:04.954 "buf_count": 2048 00:07:04.954 } 00:07:04.954 } 00:07:04.954 ] 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "subsystem": "bdev", 00:07:04.954 "config": [ 00:07:04.954 { 00:07:04.954 "method": "bdev_set_options", 00:07:04.954 "params": { 00:07:04.954 "bdev_io_pool_size": 65535, 00:07:04.954 "bdev_io_cache_size": 256, 00:07:04.954 "bdev_auto_examine": true, 00:07:04.954 "iobuf_small_cache_size": 128, 00:07:04.954 "iobuf_large_cache_size": 16 00:07:04.954 } 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "method": "bdev_raid_set_options", 00:07:04.954 "params": { 00:07:04.954 "process_window_size_kb": 1024, 00:07:04.954 "process_max_bandwidth_mb_sec": 0 00:07:04.954 } 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "method": "bdev_iscsi_set_options", 00:07:04.954 "params": { 00:07:04.954 "timeout_sec": 30 00:07:04.954 } 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "method": "bdev_nvme_set_options", 00:07:04.954 "params": { 00:07:04.954 "action_on_timeout": "none", 00:07:04.954 "timeout_us": 0, 00:07:04.954 "timeout_admin_us": 0, 00:07:04.954 "keep_alive_timeout_ms": 10000, 00:07:04.954 "arbitration_burst": 0, 00:07:04.954 "low_priority_weight": 0, 00:07:04.954 "medium_priority_weight": 0, 00:07:04.954 "high_priority_weight": 0, 00:07:04.954 "nvme_adminq_poll_period_us": 10000, 00:07:04.954 "nvme_ioq_poll_period_us": 0, 00:07:04.954 "io_queue_requests": 0, 00:07:04.954 "delay_cmd_submit": true, 00:07:04.954 "transport_retry_count": 4, 00:07:04.954 "bdev_retry_count": 3, 00:07:04.954 "transport_ack_timeout": 0, 00:07:04.954 "ctrlr_loss_timeout_sec": 0, 00:07:04.954 "reconnect_delay_sec": 0, 00:07:04.954 "fast_io_fail_timeout_sec": 0, 00:07:04.954 "disable_auto_failback": false, 00:07:04.954 "generate_uuids": 
false, 00:07:04.954 "transport_tos": 0, 00:07:04.954 "nvme_error_stat": false, 00:07:04.954 "rdma_srq_size": 0, 00:07:04.954 "io_path_stat": false, 00:07:04.954 "allow_accel_sequence": false, 00:07:04.954 "rdma_max_cq_size": 0, 00:07:04.954 "rdma_cm_event_timeout_ms": 0, 00:07:04.954 "dhchap_digests": [ 00:07:04.954 "sha256", 00:07:04.954 "sha384", 00:07:04.954 "sha512" 00:07:04.954 ], 00:07:04.954 "dhchap_dhgroups": [ 00:07:04.954 "null", 00:07:04.954 "ffdhe2048", 00:07:04.954 "ffdhe3072", 00:07:04.954 "ffdhe4096", 00:07:04.954 "ffdhe6144", 00:07:04.954 "ffdhe8192" 00:07:04.954 ] 00:07:04.954 } 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "method": "bdev_nvme_set_hotplug", 00:07:04.954 "params": { 00:07:04.954 "period_us": 100000, 00:07:04.954 "enable": false 00:07:04.954 } 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "method": "bdev_wait_for_examine" 00:07:04.954 } 00:07:04.954 ] 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "subsystem": "scsi", 00:07:04.954 "config": null 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "subsystem": "scheduler", 00:07:04.954 "config": [ 00:07:04.954 { 00:07:04.954 "method": "framework_set_scheduler", 00:07:04.954 "params": { 00:07:04.954 "name": "static" 00:07:04.954 } 00:07:04.954 } 00:07:04.954 ] 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "subsystem": "vhost_scsi", 00:07:04.954 "config": [] 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "subsystem": "vhost_blk", 00:07:04.954 "config": [] 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "subsystem": "ublk", 00:07:04.954 "config": [] 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "subsystem": "nbd", 00:07:04.954 "config": [] 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "subsystem": "nvmf", 00:07:04.954 "config": [ 00:07:04.954 { 00:07:04.954 "method": "nvmf_set_config", 00:07:04.954 "params": { 00:07:04.954 "discovery_filter": "match_any", 00:07:04.954 "admin_cmd_passthru": { 00:07:04.954 "identify_ctrlr": false 00:07:04.954 } 00:07:04.954 } 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "method": "nvmf_set_max_subsystems", 00:07:04.954 "params": { 00:07:04.954 "max_subsystems": 1024 00:07:04.954 } 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "method": "nvmf_set_crdt", 00:07:04.954 "params": { 00:07:04.954 "crdt1": 0, 00:07:04.954 "crdt2": 0, 00:07:04.954 "crdt3": 0 00:07:04.954 } 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "method": "nvmf_create_transport", 00:07:04.954 "params": { 00:07:04.954 "trtype": "TCP", 00:07:04.954 "max_queue_depth": 128, 00:07:04.954 "max_io_qpairs_per_ctrlr": 127, 00:07:04.954 "in_capsule_data_size": 4096, 00:07:04.954 "max_io_size": 131072, 00:07:04.954 "io_unit_size": 131072, 00:07:04.954 "max_aq_depth": 128, 00:07:04.954 "num_shared_buffers": 511, 00:07:04.954 "buf_cache_size": 4294967295, 00:07:04.954 "dif_insert_or_strip": false, 00:07:04.954 "zcopy": false, 00:07:04.954 "c2h_success": true, 00:07:04.954 "sock_priority": 0, 00:07:04.954 "abort_timeout_sec": 1, 00:07:04.954 "ack_timeout": 0, 00:07:04.954 "data_wr_pool_size": 0 00:07:04.954 } 00:07:04.954 } 00:07:04.954 ] 00:07:04.954 }, 00:07:04.954 { 00:07:04.954 "subsystem": "iscsi", 00:07:04.954 "config": [ 00:07:04.954 { 00:07:04.954 "method": "iscsi_set_options", 00:07:04.954 "params": { 00:07:04.954 "node_base": "iqn.2016-06.io.spdk", 00:07:04.954 "max_sessions": 128, 00:07:04.954 "max_connections_per_session": 2, 00:07:04.954 "max_queue_depth": 64, 00:07:04.954 "default_time2wait": 2, 00:07:04.954 "default_time2retain": 20, 00:07:04.954 "first_burst_length": 8192, 00:07:04.954 "immediate_data": true, 00:07:04.954 "allow_duplicated_isid": 
false, 00:07:04.954 "error_recovery_level": 0, 00:07:04.954 "nop_timeout": 60, 00:07:04.954 "nop_in_interval": 30, 00:07:04.954 "disable_chap": false, 00:07:04.954 "require_chap": false, 00:07:04.954 "mutual_chap": false, 00:07:04.954 "chap_group": 0, 00:07:04.954 "max_large_datain_per_connection": 64, 00:07:04.954 "max_r2t_per_connection": 4, 00:07:04.955 "pdu_pool_size": 36864, 00:07:04.955 "immediate_data_pool_size": 16384, 00:07:04.955 "data_out_pool_size": 2048 00:07:04.955 } 00:07:04.955 } 00:07:04.955 ] 00:07:04.955 } 00:07:04.955 ] 00:07:04.955 } 00:07:04.955 10:55:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:04.955 10:55:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1276615 00:07:04.955 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1276615 ']' 00:07:04.955 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1276615 00:07:04.955 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:04.955 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.955 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1276615 00:07:04.955 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.955 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.955 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1276615' 00:07:04.955 killing process with pid 1276615 00:07:04.955 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1276615 00:07:04.955 10:55:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1276615 00:07:05.216 10:55:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:05.216 10:55:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1276854 00:07:05.216 10:55:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:10.537 10:55:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1276854 00:07:10.537 10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1276854 ']' 00:07:10.537 10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1276854 00:07:10.537 10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:10.537 10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.537 10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1276854 00:07:10.537 10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.537 10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.537 10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1276854' 00:07:10.537 killing process with pid 1276854 00:07:10.537 10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1276854 00:07:10.537 10:55:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
1276854 00:07:10.798 10:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:10.798 10:55:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:10.798 00:07:10.798 real 0m6.729s 00:07:10.798 user 0m6.572s 00:07:10.798 sys 0m0.563s 00:07:10.798 10:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.798 10:55:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:10.798 ************************************ 00:07:10.798 END TEST skip_rpc_with_json 00:07:10.798 ************************************ 00:07:10.798 10:55:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:10.798 10:55:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.798 10:55:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.798 10:55:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.798 ************************************ 00:07:10.798 START TEST skip_rpc_with_delay 00:07:10.798 ************************************ 00:07:10.798 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:10.798 10:55:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:10.798 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:10.798 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:10.798 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:10.798 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.798 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:10.798 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.799 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:10.799 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.799 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:10.799 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:10.799 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:10.799 [2024-07-26 10:55:30.170091] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
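skip_rpc_with_json above is a save/restore round trip: the first target creates the TCP transport over RPC, save_config captures the live configuration (the JSON dump printed above), a second target is started with --no-rpc-server --json pointing at that file, and the test finally greps its log for 'TCP Transport Init' to prove the transport came back without any RPCs. In outline (a sketch of the same flow; the exact redirections are assumptions about how skip_rpc.sh wires this together):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  CONFIG=$SPDK/test/rpc/config.json
  LOG=$SPDK/test/rpc/log.txt
  # capture the running target's configuration, including the freshly created TCP transport
  $SPDK/scripts/rpc.py save_config > $CONFIG
  # restart from the file only -- no RPC server, no further RPCs
  $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json $CONFIG > $LOG 2>&1 &
  sleep 5; kill %1
  grep -q 'TCP Transport Init' $LOG && echo "transport restored from config.json"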
00:07:10.799 [2024-07-26 10:55:30.170151] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:10.799 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:10.799 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:10.799 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:10.799 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:10.799 00:07:10.799 real 0m0.067s 00:07:10.799 user 0m0.046s 00:07:10.799 sys 0m0.021s 00:07:10.799 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.799 10:55:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:10.799 ************************************ 00:07:10.799 END TEST skip_rpc_with_delay 00:07:10.799 ************************************ 00:07:10.799 10:55:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:10.799 10:55:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:10.799 10:55:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:10.799 10:55:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.799 10:55:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.799 10:55:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.799 ************************************ 00:07:10.799 START TEST exit_on_failed_rpc_init 00:07:10.799 ************************************ 00:07:10.799 10:55:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:10.799 10:55:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1277833 00:07:10.799 10:55:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1277833 00:07:10.799 10:55:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.799 10:55:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1277833 ']' 00:07:10.799 10:55:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.799 10:55:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.799 10:55:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.799 10:55:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.799 10:55:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:11.061 [2024-07-26 10:55:30.298037] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:11.061 [2024-07-26 10:55:30.298083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277833 ] 00:07:11.061 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.061 [2024-07-26 10:55:30.352269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.061 [2024-07-26 10:55:30.423556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.632 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.633 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:11.633 10:55:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.633 10:55:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:11.633 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:11.633 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:11.633 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:11.633 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.633 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:11.633 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.633 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:11.633 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.633 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:11.633 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:11.633 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:11.892 [2024-07-26 10:55:31.138192] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:11.892 [2024-07-26 10:55:31.138241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278035 ] 00:07:11.892 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.892 [2024-07-26 10:55:31.193224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.892 [2024-07-26 10:55:31.268628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.892 [2024-07-26 10:55:31.268696] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:11.892 [2024-07-26 10:55:31.268705] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:11.892 [2024-07-26 10:55:31.268711] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1277833 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1277833 ']' 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1277833 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1277833 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.892 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.893 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1277833' 00:07:11.893 killing process with pid 1277833 00:07:11.893 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1277833 00:07:11.893 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1277833 00:07:12.464 00:07:12.464 real 0m1.445s 00:07:12.464 user 0m1.672s 00:07:12.464 sys 0m0.385s 00:07:12.464 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.464 10:55:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:12.464 ************************************ 00:07:12.464 END TEST exit_on_failed_rpc_init 00:07:12.464 ************************************ 00:07:12.464 10:55:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:12.464 00:07:12.464 real 0m13.954s 00:07:12.464 user 0m13.574s 00:07:12.464 sys 0m1.451s 00:07:12.464 10:55:31 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.464 10:55:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.464 ************************************ 00:07:12.464 END TEST skip_rpc 00:07:12.464 ************************************ 00:07:12.464 10:55:31 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:12.464 10:55:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.464 10:55:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.464 10:55:31 -- common/autotest_common.sh@10 -- # set +x 00:07:12.464 ************************************ 00:07:12.464 START TEST rpc_client 00:07:12.464 ************************************ 00:07:12.464 10:55:31 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:12.464 * Looking for test storage... 00:07:12.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:12.464 10:55:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:12.464 OK 00:07:12.464 10:55:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:12.464 00:07:12.464 real 0m0.094s 00:07:12.464 user 0m0.046s 00:07:12.464 sys 0m0.054s 00:07:12.464 10:55:31 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.464 10:55:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:12.464 ************************************ 00:07:12.464 END TEST rpc_client 00:07:12.464 ************************************ 00:07:12.464 10:55:31 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:12.464 10:55:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.464 10:55:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.464 10:55:31 -- common/autotest_common.sh@10 -- # set +x 00:07:12.464 ************************************ 00:07:12.464 START TEST json_config 00:07:12.464 ************************************ 00:07:12.464 10:55:31 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:12.724 10:55:32 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.724 10:55:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:12.724 10:55:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.724 10:55:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.724 10:55:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.724 10:55:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:07:12.725 10:55:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.725 10:55:32 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.725 10:55:32 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.725 10:55:32 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.725 10:55:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.725 10:55:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.725 10:55:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.725 10:55:32 json_config -- paths/export.sh@5 -- # export PATH 00:07:12.725 10:55:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@47 -- # : 0 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.725 10:55:32 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:07:12.725 INFO: JSON configuration test init 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:07:12.725 10:55:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:12.725 10:55:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:07:12.725 10:55:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:12.725 10:55:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.725 10:55:32 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:07:12.725 10:55:32 json_config -- json_config/common.sh@9 -- # local app=target 00:07:12.725 10:55:32 json_config -- json_config/common.sh@10 -- # shift 00:07:12.725 10:55:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:12.725 10:55:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:12.725 10:55:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:12.725 10:55:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:07:12.725 10:55:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:12.725 10:55:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1278185 00:07:12.725 10:55:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:12.725 10:55:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:12.725 Waiting for target to run... 00:07:12.725 10:55:32 json_config -- json_config/common.sh@25 -- # waitforlisten 1278185 /var/tmp/spdk_tgt.sock 00:07:12.725 10:55:32 json_config -- common/autotest_common.sh@831 -- # '[' -z 1278185 ']' 00:07:12.725 10:55:32 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:12.725 10:55:32 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.725 10:55:32 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:12.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:12.725 10:55:32 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.725 10:55:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:12.725 [2024-07-26 10:55:32.093509] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:12.725 [2024-07-26 10:55:32.093560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278185 ] 00:07:12.725 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.986 [2024-07-26 10:55:32.360115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.986 [2024-07-26 10:55:32.429256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.557 10:55:32 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.557 10:55:32 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:13.557 10:55:32 json_config -- json_config/common.sh@26 -- # echo '' 00:07:13.557 00:07:13.557 10:55:32 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:07:13.557 10:55:32 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:07:13.557 10:55:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:13.557 10:55:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:13.557 10:55:32 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:07:13.557 10:55:32 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:07:13.557 10:55:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:13.557 10:55:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:13.557 10:55:32 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:13.557 10:55:32 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:07:13.557 10:55:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:16.861 10:55:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:16.861 10:55:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:16.861 10:55:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@51 -- # sort 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:07:16.861 10:55:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.861 10:55:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@59 -- # return 0 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:07:16.861 10:55:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:16.861 10:55:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:07:16.861 10:55:36 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:16.861 10:55:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:17.122 MallocForNvmf0 00:07:17.122 
10:55:36 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:17.122 10:55:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:17.122 MallocForNvmf1 00:07:17.122 10:55:36 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:17.122 10:55:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:17.383 [2024-07-26 10:55:36.736544] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.383 10:55:36 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:17.383 10:55:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:17.643 10:55:36 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:17.643 10:55:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:17.643 10:55:37 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:17.643 10:55:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:17.904 10:55:37 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:17.904 10:55:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:17.904 [2024-07-26 10:55:37.382588] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:17.904 10:55:37 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:07:17.904 10:55:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:17.904 10:55:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:18.165 10:55:37 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:07:18.165 10:55:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:18.165 10:55:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:18.165 10:55:37 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:07:18.165 10:55:37 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:18.165 10:55:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:18.165 MallocBdevForConfigChangeCheck 00:07:18.165 10:55:37 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:07:18.165 10:55:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:18.165 10:55:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:18.425 10:55:37 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:07:18.425 10:55:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:18.686 10:55:37 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:07:18.686 INFO: shutting down applications... 00:07:18.686 10:55:37 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:07:18.686 10:55:37 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:07:18.686 10:55:37 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:07:18.686 10:55:37 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:20.070 Calling clear_iscsi_subsystem 00:07:20.070 Calling clear_nvmf_subsystem 00:07:20.070 Calling clear_nbd_subsystem 00:07:20.070 Calling clear_ublk_subsystem 00:07:20.070 Calling clear_vhost_blk_subsystem 00:07:20.070 Calling clear_vhost_scsi_subsystem 00:07:20.070 Calling clear_bdev_subsystem 00:07:20.070 10:55:39 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:20.070 10:55:39 json_config -- json_config/json_config.sh@347 -- # count=100 00:07:20.070 10:55:39 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:07:20.070 10:55:39 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:20.070 10:55:39 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:20.070 10:55:39 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:20.331 10:55:39 json_config -- json_config/json_config.sh@349 -- # break 00:07:20.331 10:55:39 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:07:20.331 10:55:39 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:07:20.331 10:55:39 json_config -- json_config/common.sh@31 -- # local app=target 00:07:20.331 10:55:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:20.331 10:55:39 json_config -- json_config/common.sh@35 -- # [[ -n 1278185 ]] 00:07:20.331 10:55:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1278185 00:07:20.331 10:55:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:20.331 10:55:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:20.331 10:55:39 json_config -- json_config/common.sh@41 -- # kill -0 1278185 00:07:20.331 10:55:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:20.900 10:55:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:20.900 10:55:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:20.900 10:55:40 json_config -- json_config/common.sh@41 -- # kill -0 1278185 00:07:20.900 10:55:40 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:07:20.900 10:55:40 json_config -- json_config/common.sh@43 -- # break 00:07:20.900 10:55:40 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:20.900 10:55:40 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:20.900 SPDK target shutdown done 00:07:20.900 10:55:40 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:07:20.900 INFO: relaunching applications... 00:07:20.900 10:55:40 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:20.900 10:55:40 json_config -- json_config/common.sh@9 -- # local app=target 00:07:20.900 10:55:40 json_config -- json_config/common.sh@10 -- # shift 00:07:20.900 10:55:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:20.900 10:55:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:20.900 10:55:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:20.900 10:55:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:20.901 10:55:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:20.901 10:55:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1279703 00:07:20.901 10:55:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:20.901 Waiting for target to run... 00:07:20.901 10:55:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:20.901 10:55:40 json_config -- json_config/common.sh@25 -- # waitforlisten 1279703 /var/tmp/spdk_tgt.sock 00:07:20.901 10:55:40 json_config -- common/autotest_common.sh@831 -- # '[' -z 1279703 ']' 00:07:20.901 10:55:40 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:20.901 10:55:40 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.901 10:55:40 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:20.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:20.901 10:55:40 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.901 10:55:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:20.901 [2024-07-26 10:55:40.384440] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:20.901 [2024-07-26 10:55:40.384494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279703 ] 00:07:21.161 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.421 [2024-07-26 10:55:40.671105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.421 [2024-07-26 10:55:40.739620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.722 [2024-07-26 10:55:43.750885] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.722 [2024-07-26 10:55:43.783216] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:24.722 10:55:43 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.722 10:55:43 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:24.722 10:55:43 json_config -- json_config/common.sh@26 -- # echo '' 00:07:24.722 00:07:24.722 10:55:43 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:07:24.722 10:55:43 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:24.723 INFO: Checking if target configuration is the same... 00:07:24.723 10:55:43 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:24.723 10:55:43 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:07:24.723 10:55:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:24.723 + '[' 2 -ne 2 ']' 00:07:24.723 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:24.723 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:24.723 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:24.723 +++ basename /dev/fd/62 00:07:24.723 ++ mktemp /tmp/62.XXX 00:07:24.723 + tmp_file_1=/tmp/62.vOW 00:07:24.723 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:24.723 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:24.723 + tmp_file_2=/tmp/spdk_tgt_config.json.jiK 00:07:24.723 + ret=0 00:07:24.723 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:24.723 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:24.723 + diff -u /tmp/62.vOW /tmp/spdk_tgt_config.json.jiK 00:07:24.723 + echo 'INFO: JSON config files are the same' 00:07:24.723 INFO: JSON config files are the same 00:07:24.723 + rm /tmp/62.vOW /tmp/spdk_tgt_config.json.jiK 00:07:24.723 + exit 0 00:07:24.723 10:55:44 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:07:24.723 10:55:44 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:24.723 INFO: changing configuration and checking if this can be detected... 
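The configuration that these comparisons exercise was assembled earlier in this test purely through JSON-RPC calls against /var/tmp/spdk_tgt.sock (the tgt_rpc entries above). Collected into a plain shell sketch, and assuming an SPDK checkout with scripts/rpc.py plus a target already listening on that socket, the setup amounts to:

    # Sketch of the NVMe-oF/TCP target setup driven through rpc.py in this test.
    rpc() { scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc nvmf_create_transport -t tcp -u 8192 -c 0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
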
00:07:24.723 10:55:44 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:24.723 10:55:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:24.983 10:55:44 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:24.983 10:55:44 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:07:24.983 10:55:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:24.983 + '[' 2 -ne 2 ']' 00:07:24.983 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:24.983 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:24.983 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:24.983 +++ basename /dev/fd/62 00:07:24.983 ++ mktemp /tmp/62.XXX 00:07:24.983 + tmp_file_1=/tmp/62.V3p 00:07:24.983 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:24.983 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:24.983 + tmp_file_2=/tmp/spdk_tgt_config.json.ecL 00:07:24.983 + ret=0 00:07:24.983 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:25.244 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:25.244 + diff -u /tmp/62.V3p /tmp/spdk_tgt_config.json.ecL 00:07:25.244 + ret=1 00:07:25.244 + echo '=== Start of file: /tmp/62.V3p ===' 00:07:25.244 + cat /tmp/62.V3p 00:07:25.244 + echo '=== End of file: /tmp/62.V3p ===' 00:07:25.244 + echo '' 00:07:25.244 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ecL ===' 00:07:25.244 + cat /tmp/spdk_tgt_config.json.ecL 00:07:25.244 + echo '=== End of file: /tmp/spdk_tgt_config.json.ecL ===' 00:07:25.244 + echo '' 00:07:25.244 + rm /tmp/62.V3p /tmp/spdk_tgt_config.json.ecL 00:07:25.244 + exit 1 00:07:25.244 10:55:44 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:07:25.244 INFO: configuration change detected. 
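Both verdicts above ('INFO: JSON config files are the same' and 'INFO: configuration change detected.') come from json_diff.sh: it saves the live configuration over RPC, normalizes both sides with config_filter.py -method sort, and runs diff -u, so any difference flips the return code to 1. A rough equivalent, assuming the same SPDK tree layout and that config_filter.py reads JSON on stdin and writes the sorted result to stdout, is:

    # Compare the running target's config against the saved spdk_tgt_config.json.
    live=$(mktemp); saved=$(mktemp)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > "$live"
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$saved"
    if diff -u "$saved" "$live"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$saved"
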
00:07:25.244 10:55:44 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:07:25.244 10:55:44 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:07:25.244 10:55:44 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.244 10:55:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.244 10:55:44 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:07:25.244 10:55:44 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:07:25.244 10:55:44 json_config -- json_config/json_config.sh@321 -- # [[ -n 1279703 ]] 00:07:25.244 10:55:44 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:07:25.244 10:55:44 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:07:25.244 10:55:44 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.244 10:55:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.244 10:55:44 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:07:25.244 10:55:44 json_config -- json_config/json_config.sh@197 -- # uname -s 00:07:25.244 10:55:44 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:07:25.244 10:55:44 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:07:25.244 10:55:44 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:07:25.244 10:55:44 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:07:25.244 10:55:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.244 10:55:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.505 10:55:44 json_config -- json_config/json_config.sh@327 -- # killprocess 1279703 00:07:25.505 10:55:44 json_config -- common/autotest_common.sh@950 -- # '[' -z 1279703 ']' 00:07:25.505 10:55:44 json_config -- common/autotest_common.sh@954 -- # kill -0 1279703 00:07:25.505 10:55:44 json_config -- common/autotest_common.sh@955 -- # uname 00:07:25.505 10:55:44 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:25.505 10:55:44 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1279703 00:07:25.505 10:55:44 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:25.505 10:55:44 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:25.505 10:55:44 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1279703' 00:07:25.505 killing process with pid 1279703 00:07:25.505 10:55:44 json_config -- common/autotest_common.sh@969 -- # kill 1279703 00:07:25.505 10:55:44 json_config -- common/autotest_common.sh@974 -- # wait 1279703 00:07:26.958 10:55:46 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:26.958 10:55:46 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:07:26.958 10:55:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:26.958 10:55:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.958 10:55:46 json_config -- json_config/json_config.sh@332 -- # return 0 00:07:26.958 10:55:46 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:07:26.958 INFO: Success 00:07:26.959 00:07:26.959 real 0m14.412s 
00:07:26.959 user 0m15.210s 00:07:26.959 sys 0m1.557s 00:07:26.959 10:55:46 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.959 10:55:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.959 ************************************ 00:07:26.959 END TEST json_config 00:07:26.959 ************************************ 00:07:26.959 10:55:46 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:26.959 10:55:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.959 10:55:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.959 10:55:46 -- common/autotest_common.sh@10 -- # set +x 00:07:26.959 ************************************ 00:07:26.959 START TEST json_config_extra_key 00:07:26.959 ************************************ 00:07:26.959 10:55:46 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:27.220 10:55:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.220 10:55:46 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.220 10:55:46 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.220 10:55:46 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.220 10:55:46 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.220 10:55:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.220 10:55:46 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.220 10:55:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:27.220 10:55:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:27.220 10:55:46 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:27.220 10:55:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:27.220 10:55:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:27.220 10:55:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:27.220 10:55:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:27.220 10:55:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:27.220 10:55:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:27.220 10:55:46 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:27.220 10:55:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:27.220 10:55:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:27.220 10:55:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:27.220 10:55:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:27.220 INFO: launching applications... 00:07:27.220 10:55:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:27.220 10:55:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:27.220 10:55:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:27.220 10:55:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:27.220 10:55:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:27.220 10:55:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:27.220 10:55:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:27.220 10:55:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:27.220 10:55:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1280958 00:07:27.220 10:55:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:27.220 Waiting for target to run... 00:07:27.220 10:55:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1280958 /var/tmp/spdk_tgt.sock 00:07:27.220 10:55:46 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1280958 ']' 00:07:27.220 10:55:46 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:27.220 10:55:46 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:27.220 10:55:46 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.220 10:55:46 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:27.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:27.220 10:55:46 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.220 10:55:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:27.220 [2024-07-26 10:55:46.573222] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:27.220 [2024-07-26 10:55:46.573273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280958 ] 00:07:27.220 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.481 [2024-07-26 10:55:46.839444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.481 [2024-07-26 10:55:46.907637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.051 10:55:47 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.051 10:55:47 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:28.051 10:55:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:28.051 00:07:28.051 10:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:28.051 INFO: shutting down applications... 00:07:28.051 10:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:28.051 10:55:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:28.051 10:55:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:28.051 10:55:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1280958 ]] 00:07:28.051 10:55:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1280958 00:07:28.051 10:55:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:28.051 10:55:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:28.051 10:55:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1280958 00:07:28.051 10:55:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:28.620 10:55:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:28.620 10:55:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:28.620 10:55:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1280958 00:07:28.620 10:55:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:28.621 10:55:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:28.621 10:55:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:28.621 10:55:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:28.621 SPDK target shutdown done 00:07:28.621 10:55:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:28.621 Success 00:07:28.621 00:07:28.621 real 0m1.448s 00:07:28.621 user 0m1.255s 00:07:28.621 sys 0m0.351s 00:07:28.621 10:55:47 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.621 10:55:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:28.621 ************************************ 00:07:28.621 END TEST json_config_extra_key 00:07:28.621 ************************************ 00:07:28.621 10:55:47 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:28.621 10:55:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.621 10:55:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.621 10:55:47 -- common/autotest_common.sh@10 -- # set +x 00:07:28.621 
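Both json_config and json_config_extra_key stop the target the same way, as the shutdown trace above shows: send SIGINT, then poll the PID with kill -0 for up to 30 half-second intervals before declaring the shutdown done. As a small sketch of that pattern, assuming pid holds the spdk_tgt process ID:

    # Graceful shutdown: SIGINT, then wait (up to ~15 s) for the process to exit.
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done
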
************************************ 00:07:28.621 START TEST alias_rpc 00:07:28.621 ************************************ 00:07:28.621 10:55:47 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:28.621 * Looking for test storage... 00:07:28.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:28.621 10:55:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:28.621 10:55:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1281236 00:07:28.621 10:55:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1281236 00:07:28.621 10:55:48 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1281236 ']' 00:07:28.621 10:55:48 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.621 10:55:48 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.621 10:55:48 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.621 10:55:48 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.621 10:55:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.621 10:55:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:28.621 [2024-07-26 10:55:48.080854] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:28.621 [2024-07-26 10:55:48.080908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281236 ] 00:07:28.621 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.881 [2024-07-26 10:55:48.133746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.881 [2024-07-26 10:55:48.213651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.451 10:55:48 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.451 10:55:48 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:29.451 10:55:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:29.712 10:55:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1281236 00:07:29.712 10:55:49 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1281236 ']' 00:07:29.712 10:55:49 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1281236 00:07:29.712 10:55:49 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:29.712 10:55:49 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.712 10:55:49 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1281236 00:07:29.712 10:55:49 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:29.712 10:55:49 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:29.712 10:55:49 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1281236' 00:07:29.712 killing process with pid 1281236 00:07:29.712 10:55:49 alias_rpc -- common/autotest_common.sh@969 -- # kill 1281236 00:07:29.712 10:55:49 
alias_rpc -- common/autotest_common.sh@974 -- # wait 1281236 00:07:29.972 00:07:29.972 real 0m1.475s 00:07:29.972 user 0m1.607s 00:07:29.972 sys 0m0.401s 00:07:29.972 10:55:49 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.972 10:55:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.972 ************************************ 00:07:29.972 END TEST alias_rpc 00:07:29.972 ************************************ 00:07:29.972 10:55:49 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:29.972 10:55:49 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:29.972 10:55:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.972 10:55:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.972 10:55:49 -- common/autotest_common.sh@10 -- # set +x 00:07:30.233 ************************************ 00:07:30.233 START TEST spdkcli_tcp 00:07:30.233 ************************************ 00:07:30.233 10:55:49 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:30.233 * Looking for test storage... 00:07:30.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:30.233 10:55:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:30.233 10:55:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:30.233 10:55:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:30.233 10:55:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:30.233 10:55:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:30.233 10:55:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:30.233 10:55:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:30.233 10:55:49 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:30.233 10:55:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.233 10:55:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1281527 00:07:30.233 10:55:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:30.233 10:55:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1281527 00:07:30.233 10:55:49 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1281527 ']' 00:07:30.233 10:55:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.233 10:55:49 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.233 10:55:49 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.233 10:55:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.233 10:55:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:30.233 [2024-07-26 10:55:49.625819] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:30.233 [2024-07-26 10:55:49.625862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281527 ] 00:07:30.233 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.233 [2024-07-26 10:55:49.680724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.493 [2024-07-26 10:55:49.756007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.493 [2024-07-26 10:55:49.756008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.063 10:55:50 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.063 10:55:50 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:31.063 10:55:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1281751 00:07:31.063 10:55:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:31.063 10:55:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:31.324 [ 00:07:31.324 "bdev_malloc_delete", 00:07:31.324 "bdev_malloc_create", 00:07:31.324 "bdev_null_resize", 00:07:31.324 "bdev_null_delete", 00:07:31.324 "bdev_null_create", 00:07:31.324 "bdev_nvme_cuse_unregister", 00:07:31.324 "bdev_nvme_cuse_register", 00:07:31.324 "bdev_opal_new_user", 00:07:31.324 "bdev_opal_set_lock_state", 00:07:31.324 "bdev_opal_delete", 00:07:31.324 "bdev_opal_get_info", 00:07:31.324 "bdev_opal_create", 00:07:31.324 "bdev_nvme_opal_revert", 00:07:31.324 "bdev_nvme_opal_init", 00:07:31.324 "bdev_nvme_send_cmd", 00:07:31.324 "bdev_nvme_get_path_iostat", 00:07:31.324 "bdev_nvme_get_mdns_discovery_info", 00:07:31.324 "bdev_nvme_stop_mdns_discovery", 00:07:31.324 "bdev_nvme_start_mdns_discovery", 00:07:31.324 "bdev_nvme_set_multipath_policy", 00:07:31.324 "bdev_nvme_set_preferred_path", 00:07:31.324 "bdev_nvme_get_io_paths", 00:07:31.324 "bdev_nvme_remove_error_injection", 00:07:31.324 "bdev_nvme_add_error_injection", 00:07:31.324 "bdev_nvme_get_discovery_info", 00:07:31.324 "bdev_nvme_stop_discovery", 00:07:31.324 "bdev_nvme_start_discovery", 00:07:31.324 "bdev_nvme_get_controller_health_info", 00:07:31.324 "bdev_nvme_disable_controller", 00:07:31.324 "bdev_nvme_enable_controller", 00:07:31.324 "bdev_nvme_reset_controller", 00:07:31.325 "bdev_nvme_get_transport_statistics", 00:07:31.325 "bdev_nvme_apply_firmware", 00:07:31.325 "bdev_nvme_detach_controller", 00:07:31.325 "bdev_nvme_get_controllers", 00:07:31.325 "bdev_nvme_attach_controller", 00:07:31.325 "bdev_nvme_set_hotplug", 00:07:31.325 "bdev_nvme_set_options", 00:07:31.325 "bdev_passthru_delete", 00:07:31.325 "bdev_passthru_create", 00:07:31.325 "bdev_lvol_set_parent_bdev", 00:07:31.325 "bdev_lvol_set_parent", 00:07:31.325 "bdev_lvol_check_shallow_copy", 00:07:31.325 "bdev_lvol_start_shallow_copy", 00:07:31.325 "bdev_lvol_grow_lvstore", 00:07:31.325 "bdev_lvol_get_lvols", 00:07:31.325 "bdev_lvol_get_lvstores", 00:07:31.325 "bdev_lvol_delete", 00:07:31.325 "bdev_lvol_set_read_only", 00:07:31.325 "bdev_lvol_resize", 00:07:31.325 "bdev_lvol_decouple_parent", 00:07:31.325 "bdev_lvol_inflate", 00:07:31.325 "bdev_lvol_rename", 00:07:31.325 "bdev_lvol_clone_bdev", 00:07:31.325 "bdev_lvol_clone", 00:07:31.325 "bdev_lvol_snapshot", 00:07:31.325 "bdev_lvol_create", 00:07:31.325 "bdev_lvol_delete_lvstore", 00:07:31.325 
"bdev_lvol_rename_lvstore", 00:07:31.325 "bdev_lvol_create_lvstore", 00:07:31.325 "bdev_raid_set_options", 00:07:31.325 "bdev_raid_remove_base_bdev", 00:07:31.325 "bdev_raid_add_base_bdev", 00:07:31.325 "bdev_raid_delete", 00:07:31.325 "bdev_raid_create", 00:07:31.325 "bdev_raid_get_bdevs", 00:07:31.325 "bdev_error_inject_error", 00:07:31.325 "bdev_error_delete", 00:07:31.325 "bdev_error_create", 00:07:31.325 "bdev_split_delete", 00:07:31.325 "bdev_split_create", 00:07:31.325 "bdev_delay_delete", 00:07:31.325 "bdev_delay_create", 00:07:31.325 "bdev_delay_update_latency", 00:07:31.325 "bdev_zone_block_delete", 00:07:31.325 "bdev_zone_block_create", 00:07:31.325 "blobfs_create", 00:07:31.325 "blobfs_detect", 00:07:31.325 "blobfs_set_cache_size", 00:07:31.325 "bdev_aio_delete", 00:07:31.325 "bdev_aio_rescan", 00:07:31.325 "bdev_aio_create", 00:07:31.325 "bdev_ftl_set_property", 00:07:31.325 "bdev_ftl_get_properties", 00:07:31.325 "bdev_ftl_get_stats", 00:07:31.325 "bdev_ftl_unmap", 00:07:31.325 "bdev_ftl_unload", 00:07:31.325 "bdev_ftl_delete", 00:07:31.325 "bdev_ftl_load", 00:07:31.325 "bdev_ftl_create", 00:07:31.325 "bdev_virtio_attach_controller", 00:07:31.325 "bdev_virtio_scsi_get_devices", 00:07:31.325 "bdev_virtio_detach_controller", 00:07:31.325 "bdev_virtio_blk_set_hotplug", 00:07:31.325 "bdev_iscsi_delete", 00:07:31.325 "bdev_iscsi_create", 00:07:31.325 "bdev_iscsi_set_options", 00:07:31.325 "accel_error_inject_error", 00:07:31.325 "ioat_scan_accel_module", 00:07:31.325 "dsa_scan_accel_module", 00:07:31.325 "iaa_scan_accel_module", 00:07:31.325 "vfu_virtio_create_scsi_endpoint", 00:07:31.325 "vfu_virtio_scsi_remove_target", 00:07:31.325 "vfu_virtio_scsi_add_target", 00:07:31.325 "vfu_virtio_create_blk_endpoint", 00:07:31.325 "vfu_virtio_delete_endpoint", 00:07:31.325 "keyring_file_remove_key", 00:07:31.325 "keyring_file_add_key", 00:07:31.325 "keyring_linux_set_options", 00:07:31.325 "iscsi_get_histogram", 00:07:31.325 "iscsi_enable_histogram", 00:07:31.325 "iscsi_set_options", 00:07:31.325 "iscsi_get_auth_groups", 00:07:31.325 "iscsi_auth_group_remove_secret", 00:07:31.325 "iscsi_auth_group_add_secret", 00:07:31.325 "iscsi_delete_auth_group", 00:07:31.325 "iscsi_create_auth_group", 00:07:31.325 "iscsi_set_discovery_auth", 00:07:31.325 "iscsi_get_options", 00:07:31.325 "iscsi_target_node_request_logout", 00:07:31.325 "iscsi_target_node_set_redirect", 00:07:31.325 "iscsi_target_node_set_auth", 00:07:31.325 "iscsi_target_node_add_lun", 00:07:31.325 "iscsi_get_stats", 00:07:31.325 "iscsi_get_connections", 00:07:31.325 "iscsi_portal_group_set_auth", 00:07:31.325 "iscsi_start_portal_group", 00:07:31.325 "iscsi_delete_portal_group", 00:07:31.325 "iscsi_create_portal_group", 00:07:31.325 "iscsi_get_portal_groups", 00:07:31.325 "iscsi_delete_target_node", 00:07:31.325 "iscsi_target_node_remove_pg_ig_maps", 00:07:31.325 "iscsi_target_node_add_pg_ig_maps", 00:07:31.325 "iscsi_create_target_node", 00:07:31.325 "iscsi_get_target_nodes", 00:07:31.325 "iscsi_delete_initiator_group", 00:07:31.325 "iscsi_initiator_group_remove_initiators", 00:07:31.325 "iscsi_initiator_group_add_initiators", 00:07:31.325 "iscsi_create_initiator_group", 00:07:31.325 "iscsi_get_initiator_groups", 00:07:31.325 "nvmf_set_crdt", 00:07:31.325 "nvmf_set_config", 00:07:31.325 "nvmf_set_max_subsystems", 00:07:31.325 "nvmf_stop_mdns_prr", 00:07:31.325 "nvmf_publish_mdns_prr", 00:07:31.325 "nvmf_subsystem_get_listeners", 00:07:31.325 "nvmf_subsystem_get_qpairs", 00:07:31.325 "nvmf_subsystem_get_controllers", 00:07:31.325 
"nvmf_get_stats", 00:07:31.325 "nvmf_get_transports", 00:07:31.325 "nvmf_create_transport", 00:07:31.325 "nvmf_get_targets", 00:07:31.325 "nvmf_delete_target", 00:07:31.325 "nvmf_create_target", 00:07:31.325 "nvmf_subsystem_allow_any_host", 00:07:31.325 "nvmf_subsystem_remove_host", 00:07:31.325 "nvmf_subsystem_add_host", 00:07:31.325 "nvmf_ns_remove_host", 00:07:31.325 "nvmf_ns_add_host", 00:07:31.325 "nvmf_subsystem_remove_ns", 00:07:31.325 "nvmf_subsystem_add_ns", 00:07:31.325 "nvmf_subsystem_listener_set_ana_state", 00:07:31.325 "nvmf_discovery_get_referrals", 00:07:31.325 "nvmf_discovery_remove_referral", 00:07:31.325 "nvmf_discovery_add_referral", 00:07:31.325 "nvmf_subsystem_remove_listener", 00:07:31.325 "nvmf_subsystem_add_listener", 00:07:31.325 "nvmf_delete_subsystem", 00:07:31.325 "nvmf_create_subsystem", 00:07:31.325 "nvmf_get_subsystems", 00:07:31.325 "env_dpdk_get_mem_stats", 00:07:31.325 "nbd_get_disks", 00:07:31.325 "nbd_stop_disk", 00:07:31.325 "nbd_start_disk", 00:07:31.325 "ublk_recover_disk", 00:07:31.325 "ublk_get_disks", 00:07:31.325 "ublk_stop_disk", 00:07:31.325 "ublk_start_disk", 00:07:31.325 "ublk_destroy_target", 00:07:31.325 "ublk_create_target", 00:07:31.325 "virtio_blk_create_transport", 00:07:31.325 "virtio_blk_get_transports", 00:07:31.325 "vhost_controller_set_coalescing", 00:07:31.325 "vhost_get_controllers", 00:07:31.325 "vhost_delete_controller", 00:07:31.325 "vhost_create_blk_controller", 00:07:31.325 "vhost_scsi_controller_remove_target", 00:07:31.325 "vhost_scsi_controller_add_target", 00:07:31.325 "vhost_start_scsi_controller", 00:07:31.325 "vhost_create_scsi_controller", 00:07:31.325 "thread_set_cpumask", 00:07:31.325 "framework_get_governor", 00:07:31.325 "framework_get_scheduler", 00:07:31.325 "framework_set_scheduler", 00:07:31.325 "framework_get_reactors", 00:07:31.325 "thread_get_io_channels", 00:07:31.325 "thread_get_pollers", 00:07:31.325 "thread_get_stats", 00:07:31.325 "framework_monitor_context_switch", 00:07:31.325 "spdk_kill_instance", 00:07:31.325 "log_enable_timestamps", 00:07:31.325 "log_get_flags", 00:07:31.325 "log_clear_flag", 00:07:31.325 "log_set_flag", 00:07:31.325 "log_get_level", 00:07:31.325 "log_set_level", 00:07:31.325 "log_get_print_level", 00:07:31.325 "log_set_print_level", 00:07:31.325 "framework_enable_cpumask_locks", 00:07:31.325 "framework_disable_cpumask_locks", 00:07:31.325 "framework_wait_init", 00:07:31.325 "framework_start_init", 00:07:31.325 "scsi_get_devices", 00:07:31.325 "bdev_get_histogram", 00:07:31.325 "bdev_enable_histogram", 00:07:31.325 "bdev_set_qos_limit", 00:07:31.325 "bdev_set_qd_sampling_period", 00:07:31.325 "bdev_get_bdevs", 00:07:31.325 "bdev_reset_iostat", 00:07:31.325 "bdev_get_iostat", 00:07:31.325 "bdev_examine", 00:07:31.325 "bdev_wait_for_examine", 00:07:31.325 "bdev_set_options", 00:07:31.325 "notify_get_notifications", 00:07:31.325 "notify_get_types", 00:07:31.325 "accel_get_stats", 00:07:31.325 "accel_set_options", 00:07:31.325 "accel_set_driver", 00:07:31.325 "accel_crypto_key_destroy", 00:07:31.325 "accel_crypto_keys_get", 00:07:31.325 "accel_crypto_key_create", 00:07:31.325 "accel_assign_opc", 00:07:31.325 "accel_get_module_info", 00:07:31.325 "accel_get_opc_assignments", 00:07:31.325 "vmd_rescan", 00:07:31.325 "vmd_remove_device", 00:07:31.325 "vmd_enable", 00:07:31.325 "sock_get_default_impl", 00:07:31.325 "sock_set_default_impl", 00:07:31.325 "sock_impl_set_options", 00:07:31.325 "sock_impl_get_options", 00:07:31.325 "iobuf_get_stats", 00:07:31.325 "iobuf_set_options", 
00:07:31.325 "keyring_get_keys", 00:07:31.325 "framework_get_pci_devices", 00:07:31.325 "framework_get_config", 00:07:31.325 "framework_get_subsystems", 00:07:31.325 "vfu_tgt_set_base_path", 00:07:31.325 "trace_get_info", 00:07:31.325 "trace_get_tpoint_group_mask", 00:07:31.325 "trace_disable_tpoint_group", 00:07:31.325 "trace_enable_tpoint_group", 00:07:31.325 "trace_clear_tpoint_mask", 00:07:31.325 "trace_set_tpoint_mask", 00:07:31.325 "spdk_get_version", 00:07:31.325 "rpc_get_methods" 00:07:31.325 ] 00:07:31.325 10:55:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:31.325 10:55:50 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:31.326 10:55:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:31.326 10:55:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:31.326 10:55:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1281527 00:07:31.326 10:55:50 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1281527 ']' 00:07:31.326 10:55:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1281527 00:07:31.326 10:55:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:31.326 10:55:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.326 10:55:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1281527 00:07:31.326 10:55:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.326 10:55:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.326 10:55:50 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1281527' 00:07:31.326 killing process with pid 1281527 00:07:31.326 10:55:50 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1281527 00:07:31.326 10:55:50 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1281527 00:07:31.586 00:07:31.586 real 0m1.509s 00:07:31.586 user 0m2.839s 00:07:31.586 sys 0m0.408s 00:07:31.586 10:55:50 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.586 10:55:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:31.586 ************************************ 00:07:31.586 END TEST spdkcli_tcp 00:07:31.586 ************************************ 00:07:31.586 10:55:51 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:31.586 10:55:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.586 10:55:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.586 10:55:51 -- common/autotest_common.sh@10 -- # set +x 00:07:31.586 ************************************ 00:07:31.586 START TEST dpdk_mem_utility 00:07:31.586 ************************************ 00:07:31.586 10:55:51 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:31.846 * Looking for test storage... 
00:07:31.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:31.846 10:55:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:31.846 10:55:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1281830 00:07:31.846 10:55:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:31.846 10:55:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1281830 00:07:31.846 10:55:51 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1281830 ']' 00:07:31.846 10:55:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.846 10:55:51 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.846 10:55:51 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.846 10:55:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.846 10:55:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:31.846 [2024-07-26 10:55:51.184955] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:31.846 [2024-07-26 10:55:51.185001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281830 ] 00:07:31.846 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.846 [2024-07-26 10:55:51.236726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.846 [2024-07-26 10:55:51.312418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.788 10:55:51 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.788 10:55:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:32.788 10:55:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:32.788 10:55:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:32.788 10:55:51 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.788 10:55:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:32.788 { 00:07:32.788 "filename": "/tmp/spdk_mem_dump.txt" 00:07:32.788 } 00:07:32.788 10:55:51 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.788 10:55:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:32.788 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:32.788 1 heaps totaling size 814.000000 MiB 00:07:32.788 size: 814.000000 MiB heap id: 0 00:07:32.788 end heaps---------- 00:07:32.788 8 mempools totaling size 598.116089 MiB 00:07:32.788 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:32.788 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:32.788 size: 84.521057 MiB name: bdev_io_1281830 00:07:32.788 size: 51.011292 MiB name: evtpool_1281830 00:07:32.788 
size: 50.003479 MiB name: msgpool_1281830 00:07:32.788 size: 21.763794 MiB name: PDU_Pool 00:07:32.788 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:32.788 size: 0.026123 MiB name: Session_Pool 00:07:32.789 end mempools------- 00:07:32.789 6 memzones totaling size 4.142822 MiB 00:07:32.789 size: 1.000366 MiB name: RG_ring_0_1281830 00:07:32.789 size: 1.000366 MiB name: RG_ring_1_1281830 00:07:32.789 size: 1.000366 MiB name: RG_ring_4_1281830 00:07:32.789 size: 1.000366 MiB name: RG_ring_5_1281830 00:07:32.789 size: 0.125366 MiB name: RG_ring_2_1281830 00:07:32.789 size: 0.015991 MiB name: RG_ring_3_1281830 00:07:32.789 end memzones------- 00:07:32.789 10:55:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:32.789 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:32.789 list of free elements. size: 12.519348 MiB 00:07:32.789 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:32.789 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:32.789 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:32.789 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:32.789 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:32.789 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:32.789 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:32.789 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:32.789 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:32.789 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:32.789 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:32.789 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:32.789 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:32.789 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:32.789 element at address: 0x200003a00000 with size: 0.355530 MiB 00:07:32.789 list of standard malloc elements. 
size: 199.218079 MiB 00:07:32.789 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:32.789 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:32.789 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:32.789 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:32.789 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:32.789 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:32.789 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:32.789 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:32.789 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:32.789 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:32.789 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:32.789 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:32.789 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:32.789 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:32.789 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:32.789 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:32.789 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:32.789 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:32.789 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:32.789 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:32.789 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:32.789 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:32.789 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:32.789 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:32.789 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:32.789 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:32.789 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:32.789 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:32.789 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:32.789 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:32.789 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:32.789 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:32.789 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:32.789 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:32.789 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:32.789 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:32.789 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:32.789 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:32.789 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:32.789 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:32.789 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:32.789 list of memzone associated elements. 
size: 602.262573 MiB 00:07:32.789 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:32.789 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:32.789 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:32.789 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:32.789 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:32.789 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1281830_0 00:07:32.789 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:32.789 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1281830_0 00:07:32.789 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:32.789 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1281830_0 00:07:32.789 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:32.789 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:32.789 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:32.789 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:32.789 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:32.789 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1281830 00:07:32.789 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:32.789 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1281830 00:07:32.789 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:32.789 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1281830 00:07:32.789 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:32.789 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:32.789 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:32.789 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:32.789 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:32.789 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:32.789 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:32.789 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:32.789 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:32.789 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1281830 00:07:32.789 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:32.789 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1281830 00:07:32.789 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:32.789 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1281830 00:07:32.789 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:32.789 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1281830 00:07:32.789 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:32.789 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1281830 00:07:32.789 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:32.789 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:32.789 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:32.789 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:32.789 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:32.789 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:32.789 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:32.789 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1281830 00:07:32.789 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:32.789 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:32.789 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:32.789 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:32.789 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:32.789 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1281830 00:07:32.789 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:32.789 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:32.789 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:32.789 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1281830 00:07:32.789 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:32.789 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1281830 00:07:32.789 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:32.789 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:32.789 10:55:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:32.789 10:55:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1281830 00:07:32.789 10:55:52 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1281830 ']' 00:07:32.789 10:55:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1281830 00:07:32.789 10:55:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:32.789 10:55:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:32.789 10:55:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1281830 00:07:32.789 10:55:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:32.789 10:55:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:32.789 10:55:52 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1281830' 00:07:32.789 killing process with pid 1281830 00:07:32.790 10:55:52 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1281830 00:07:32.790 10:55:52 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1281830 00:07:33.050 00:07:33.050 real 0m1.371s 00:07:33.050 user 0m1.428s 00:07:33.050 sys 0m0.403s 00:07:33.050 10:55:52 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.050 10:55:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:33.050 ************************************ 00:07:33.051 END TEST dpdk_mem_utility 00:07:33.051 ************************************ 00:07:33.051 10:55:52 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:33.051 10:55:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.051 10:55:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.051 10:55:52 -- common/autotest_common.sh@10 -- # set +x 00:07:33.051 ************************************ 00:07:33.051 START TEST event 00:07:33.051 ************************************ 00:07:33.051 10:55:52 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:33.311 * Looking for test storage... 
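The heap, mempool and memzone tables above come from scripts/dpdk_mem_info.py reading the dump that the target writes on request. A minimal sketch of that flow, with SPDK_ROOT as a placeholder and the sleep standing in for the harness's waitforlisten/rpc_cmd wrappers:

    SPDK_ROOT=/path/to/spdk

    "$SPDK_ROOT/build/bin/spdk_tgt" &
    spdkpid=$!
    sleep 1   # stand-in for the harness's waitforlisten

    # Ask the target to dump its DPDK memory stats (to /tmp/spdk_mem_dump.txt above).
    "$SPDK_ROOT/scripts/rpc.py" env_dpdk_get_mem_stats

    # Summary of all heaps/mempools/memzones, then per-element detail for heap id 0.
    "$SPDK_ROOT/scripts/dpdk_mem_info.py"
    "$SPDK_ROOT/scripts/dpdk_mem_info.py" -m 0

    kill "$spdkpid"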
00:07:33.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:33.311 10:55:52 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:33.311 10:55:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:33.311 10:55:52 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:33.311 10:55:52 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:33.311 10:55:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.311 10:55:52 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.311 ************************************ 00:07:33.311 START TEST event_perf 00:07:33.311 ************************************ 00:07:33.311 10:55:52 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:33.311 Running I/O for 1 seconds...[2024-07-26 10:55:52.633766] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:33.311 [2024-07-26 10:55:52.633834] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282120 ] 00:07:33.311 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.311 [2024-07-26 10:55:52.693980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.311 [2024-07-26 10:55:52.771813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.311 [2024-07-26 10:55:52.771907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.311 [2024-07-26 10:55:52.771995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.311 [2024-07-26 10:55:52.771997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.694 Running I/O for 1 seconds... 00:07:34.694 lcore 0: 211234 00:07:34.694 lcore 1: 211232 00:07:34.694 lcore 2: 211233 00:07:34.694 lcore 3: 211233 00:07:34.694 done. 00:07:34.694 00:07:34.694 real 0m1.230s 00:07:34.694 user 0m4.146s 00:07:34.694 sys 0m0.081s 00:07:34.694 10:55:53 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.694 10:55:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:34.694 ************************************ 00:07:34.694 END TEST event_perf 00:07:34.694 ************************************ 00:07:34.694 10:55:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:34.694 10:55:53 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:34.694 10:55:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.694 10:55:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:34.694 ************************************ 00:07:34.694 START TEST event_reactor 00:07:34.694 ************************************ 00:07:34.694 10:55:53 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:34.694 [2024-07-26 10:55:53.931479] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
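The four lcore counters above are the output of a one-second event_perf run across cores 0-3; the invocation captured in the trace is simply:

    # SPDK_ROOT is a placeholder; -m 0xF selects four cores, -t 1 runs for one second.
    SPDK_ROOT=/path/to/spdk
    "$SPDK_ROOT/test/event/event_perf/event_perf" -m 0xF -t 1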
00:07:34.694 [2024-07-26 10:55:53.931540] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282374 ] 00:07:34.694 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.694 [2024-07-26 10:55:53.988921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.694 [2024-07-26 10:55:54.060926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.636 test_start 00:07:35.636 oneshot 00:07:35.636 tick 100 00:07:35.636 tick 100 00:07:35.636 tick 250 00:07:35.636 tick 100 00:07:35.636 tick 100 00:07:35.636 tick 250 00:07:35.636 tick 100 00:07:35.636 tick 500 00:07:35.636 tick 100 00:07:35.636 tick 100 00:07:35.636 tick 250 00:07:35.636 tick 100 00:07:35.636 tick 100 00:07:35.636 test_end 00:07:35.636 00:07:35.636 real 0m1.220s 00:07:35.636 user 0m1.145s 00:07:35.636 sys 0m0.071s 00:07:35.636 10:55:55 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.636 10:55:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:35.636 ************************************ 00:07:35.636 END TEST event_reactor 00:07:35.636 ************************************ 00:07:35.896 10:55:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:35.896 10:55:55 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:35.896 10:55:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.896 10:55:55 event -- common/autotest_common.sh@10 -- # set +x 00:07:35.896 ************************************ 00:07:35.896 START TEST event_reactor_perf 00:07:35.896 ************************************ 00:07:35.896 10:55:55 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:35.896 [2024-07-26 10:55:55.214211] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:35.896 [2024-07-26 10:55:55.214292] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282620 ] 00:07:35.896 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.896 [2024-07-26 10:55:55.269909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.896 [2024-07-26 10:55:55.341214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.281 test_start 00:07:37.281 test_end 00:07:37.281 Performance: 495245 events per second 00:07:37.281 00:07:37.281 real 0m1.218s 00:07:37.281 user 0m1.142s 00:07:37.281 sys 0m0.072s 00:07:37.281 10:55:56 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.281 10:55:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:37.281 ************************************ 00:07:37.281 END TEST event_reactor_perf 00:07:37.281 ************************************ 00:07:37.281 10:55:56 event -- event/event.sh@49 -- # uname -s 00:07:37.281 10:55:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:37.281 10:55:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:37.281 10:55:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.281 10:55:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.281 10:55:56 event -- common/autotest_common.sh@10 -- # set +x 00:07:37.281 ************************************ 00:07:37.281 START TEST event_scheduler 00:07:37.281 ************************************ 00:07:37.281 10:55:56 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:37.281 * Looking for test storage... 00:07:37.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:37.281 10:55:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:37.281 10:55:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1282895 00:07:37.281 10:55:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:37.281 10:55:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:37.281 10:55:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1282895 00:07:37.281 10:55:56 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1282895 ']' 00:07:37.281 10:55:56 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.281 10:55:56 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.281 10:55:56 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
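The tick trace and the "Performance: 495245 events per second" figure above come from the reactor and reactor_perf apps, each run for one second on a single core (core mask 0x1 in the EAL parameters). The invocations, with SPDK_ROOT as a placeholder, are:

    # -t bounds the run time in seconds.
    SPDK_ROOT=/path/to/spdk
    "$SPDK_ROOT/test/event/reactor/reactor" -t 1
    "$SPDK_ROOT/test/event/reactor_perf/reactor_perf" -t 1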
00:07:37.281 10:55:56 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.281 10:55:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:37.281 [2024-07-26 10:55:56.620541] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:37.281 [2024-07-26 10:55:56.620584] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282895 ] 00:07:37.281 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.281 [2024-07-26 10:55:56.671465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.281 [2024-07-26 10:55:56.747221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.281 [2024-07-26 10:55:56.747307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.281 [2024-07-26 10:55:56.747392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.281 [2024-07-26 10:55:56.747394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.223 10:55:57 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.223 10:55:57 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:38.223 10:55:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:38.223 10:55:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.223 10:55:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 [2024-07-26 10:55:57.429767] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:38.223 [2024-07-26 10:55:57.429785] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:38.223 [2024-07-26 10:55:57.429794] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:38.223 [2024-07-26 10:55:57.429800] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:38.223 [2024-07-26 10:55:57.429805] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:38.223 10:55:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.223 10:55:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:38.223 10:55:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.223 10:55:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 [2024-07-26 10:55:57.501790] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
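The scheduler bring-up traced above follows the usual --wait-for-rpc pattern: the test app starts with subsystem init deferred, the dynamic scheduler is selected over RPC, and only then does framework_start_init run. A minimal sketch, with SPDK_ROOT as a placeholder (the harness issues these RPCs through its rpc_cmd wrapper):

    SPDK_ROOT=/path/to/spdk

    "$SPDK_ROOT/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    sleep 1   # stand-in for waitforlisten

    # Select the dynamic scheduler, then let initialization continue.
    "$SPDK_ROOT/scripts/rpc.py" framework_set_scheduler dynamic
    "$SPDK_ROOT/scripts/rpc.py" framework_start_init

The NOTICE lines above about load limit 20, core limit 80 and core busy 95 are the dynamic scheduler's options being applied at that point.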
00:07:38.223 10:55:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.223 10:55:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:38.223 10:55:57 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.223 10:55:57 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.223 10:55:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 ************************************ 00:07:38.223 START TEST scheduler_create_thread 00:07:38.223 ************************************ 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 2 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 3 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 4 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 5 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 6 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 7 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.223 8 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.223 10:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.224 9 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.224 10 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.224 10:55:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:39.165 10:55:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.165 10:55:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:39.165 10:55:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.165 10:55:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:40.551 10:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.551 10:55:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:40.551 10:55:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:40.551 10:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.551 10:55:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:41.491 10:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.491 00:07:41.491 real 0m3.382s 00:07:41.491 user 0m0.022s 00:07:41.491 sys 0m0.007s 00:07:41.491 10:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.491 10:56:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:41.491 ************************************ 00:07:41.491 END TEST scheduler_create_thread 00:07:41.491 ************************************ 00:07:41.491 10:56:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:41.491 10:56:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1282895 00:07:41.491 10:56:00 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1282895 ']' 00:07:41.491 10:56:00 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1282895 00:07:41.491 10:56:00 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:41.491 10:56:00 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.491 10:56:00 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1282895 00:07:41.750 10:56:00 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:41.751 10:56:00 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:41.751 10:56:00 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1282895' 00:07:41.751 killing process with pid 1282895 00:07:41.751 10:56:00 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1282895 00:07:41.751 10:56:00 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1282895 00:07:42.013 [2024-07-26 10:56:01.298000] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
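The scheduler_create_thread test above drives thread-lifecycle RPCs that the scheduler test app registers through its RPC plugin; the harness invokes them as rpc_cmd --plugin scheduler_plugin ..., which forwards to rpc.py (assuming the plugin module is importable, e.g. via PYTHONPATH, which the harness arranges). A sketch with SPDK_ROOT as a placeholder and the thread ids taken from this run:

    SPDK_ROOT=/path/to/spdk
    RPC() { "$SPDK_ROOT/scripts/rpc.py" --plugin scheduler_plugin "$@"; }

    # A busy thread pinned to core 0 (~100% active) and an idle pinned thread.
    RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
    RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0

    # Change a thread's activity to 50% and delete another, using the ids the
    # create calls returned (11 and 12 in the run above).
    RPC scheduler_thread_set_active 11 50
    RPC scheduler_thread_delete 12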
00:07:42.353 00:07:42.353 real 0m5.046s 00:07:42.353 user 0m10.415s 00:07:42.353 sys 0m0.358s 00:07:42.353 10:56:01 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.353 10:56:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:42.353 ************************************ 00:07:42.353 END TEST event_scheduler 00:07:42.353 ************************************ 00:07:42.353 10:56:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:42.353 10:56:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:42.353 10:56:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.353 10:56:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.353 10:56:01 event -- common/autotest_common.sh@10 -- # set +x 00:07:42.353 ************************************ 00:07:42.353 START TEST app_repeat 00:07:42.353 ************************************ 00:07:42.353 10:56:01 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:42.353 10:56:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.353 10:56:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:42.353 10:56:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:42.353 10:56:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:42.353 10:56:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:42.353 10:56:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:42.353 10:56:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:42.353 10:56:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1283859 00:07:42.353 10:56:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:42.353 10:56:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:42.353 10:56:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1283859' 00:07:42.353 Process app_repeat pid: 1283859 00:07:42.353 10:56:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:42.353 10:56:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:42.353 spdk_app_start Round 0 00:07:42.353 10:56:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1283859 /var/tmp/spdk-nbd.sock 00:07:42.353 10:56:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1283859 ']' 00:07:42.353 10:56:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:42.353 10:56:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.353 10:56:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:42.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:42.353 10:56:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.353 10:56:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:42.353 [2024-07-26 10:56:01.627031] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
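Before the NBD rounds that follow, app_repeat_test has just started its SPDK application. Stripped of the suite's helpers, the launch traced above amounts to roughly the following; the binary path, RPC socket and flags are copied from the trace, while the polling loop is only a rough stand-in for the suite's waitforlisten helper.

# Rough sketch of the app_repeat launch (not the suite's exact helper code).
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_sock=/var/tmp/spdk-nbd.sock

modprobe -n nbd || exit 1            # dry-run first: skip the test if the nbd module is unavailable
modprobe nbd

"$spdk"/test/event/app_repeat/app_repeat -r "$rpc_sock" -m 0x3 -t 4 &   # -m 0x3 = cores 0-1
repeat_pid=$!
trap 'kill "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT

until "$spdk"/scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                        # poll until the RPC socket answers (waitforlisten stand-in)
done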
00:07:42.353 [2024-07-26 10:56:01.627086] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283859 ] 00:07:42.353 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.353 [2024-07-26 10:56:01.681792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:42.353 [2024-07-26 10:56:01.761502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.353 [2024-07-26 10:56:01.761505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.295 10:56:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.295 10:56:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:43.295 10:56:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:43.295 Malloc0 00:07:43.295 10:56:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:43.557 Malloc1 00:07:43.557 10:56:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:43.557 10:56:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:43.557 /dev/nbd0 00:07:43.557 10:56:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:43.557 10:56:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:43.557 10:56:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:43.557 10:56:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:43.557 10:56:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:43.557 10:56:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:43.557 10:56:03 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:43.557 10:56:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:43.557 10:56:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:43.557 10:56:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:43.557 10:56:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:43.557 1+0 records in 00:07:43.557 1+0 records out 00:07:43.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 9.9843e-05 s, 41.0 MB/s 00:07:43.557 10:56:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:43.557 10:56:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:43.557 10:56:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:43.557 10:56:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:43.557 10:56:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:43.557 10:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.557 10:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:43.557 10:56:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:43.818 /dev/nbd1 00:07:43.818 10:56:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:43.818 10:56:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:43.818 10:56:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:43.818 10:56:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:43.818 10:56:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:43.818 10:56:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:43.818 10:56:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:43.818 10:56:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:43.818 10:56:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:43.818 10:56:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:43.818 10:56:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:43.818 1+0 records in 00:07:43.818 1+0 records out 00:07:43.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204965 s, 20.0 MB/s 00:07:43.818 10:56:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:43.818 10:56:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:43.818 10:56:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:43.818 10:56:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:43.818 10:56:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:43.818 10:56:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.818 10:56:03 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:43.818 10:56:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:43.818 10:56:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.818 10:56:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:44.077 { 00:07:44.077 "nbd_device": "/dev/nbd0", 00:07:44.077 "bdev_name": "Malloc0" 00:07:44.077 }, 00:07:44.077 { 00:07:44.077 "nbd_device": "/dev/nbd1", 00:07:44.077 "bdev_name": "Malloc1" 00:07:44.077 } 00:07:44.077 ]' 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:44.077 { 00:07:44.077 "nbd_device": "/dev/nbd0", 00:07:44.077 "bdev_name": "Malloc0" 00:07:44.077 }, 00:07:44.077 { 00:07:44.077 "nbd_device": "/dev/nbd1", 00:07:44.077 "bdev_name": "Malloc1" 00:07:44.077 } 00:07:44.077 ]' 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:44.077 /dev/nbd1' 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:44.077 /dev/nbd1' 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:44.077 256+0 records in 00:07:44.077 256+0 records out 00:07:44.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104065 s, 101 MB/s 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:44.077 256+0 records in 00:07:44.077 256+0 records out 00:07:44.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135465 s, 77.4 MB/s 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:44.077 256+0 records in 00:07:44.077 256+0 records out 00:07:44.077 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0146049 s, 71.8 MB/s 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.077 10:56:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:44.336 10:56:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:44.337 10:56:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:44.337 10:56:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:44.337 10:56:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.337 10:56:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.337 10:56:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:44.337 10:56:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:44.337 10:56:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.337 10:56:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.337 10:56:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:44.597 10:56:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:44.597 10:56:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:44.597 10:56:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:44.597 10:56:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.597 10:56:03 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.597 10:56:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:44.597 10:56:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:44.597 10:56:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.597 10:56:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:44.597 10:56:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.597 10:56:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.597 10:56:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:44.597 10:56:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:44.597 10:56:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.857 10:56:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:44.857 10:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:44.857 10:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.857 10:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:44.857 10:56:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:44.857 10:56:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:44.857 10:56:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:44.857 10:56:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:44.857 10:56:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:44.857 10:56:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:44.857 10:56:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:45.117 [2024-07-26 10:56:04.489729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:45.117 [2024-07-26 10:56:04.555988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.117 [2024-07-26 10:56:04.555990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.117 [2024-07-26 10:56:04.596703] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:45.117 [2024-07-26 10:56:04.596750] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:48.415 10:56:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:48.415 10:56:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:48.415 spdk_app_start Round 1 00:07:48.415 10:56:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1283859 /var/tmp/spdk-nbd.sock 00:07:48.415 10:56:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1283859 ']' 00:07:48.415 10:56:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:48.415 10:56:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.415 10:56:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:48.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
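The Round 0 body just traced is the generic nbd_rpc_data_verify flow. Condensed to its essential commands it looks roughly like the sketch below; every command appears verbatim in the trace, only the shell variables and the loop are mine, and the waitfornbd/waitfornbd_exit polling has been left out for brevity.

# Sketch: create two malloc bdevs, expose them over NBD, write and verify 1 MiB, tear down.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$spdk"/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
tmp=$spdk/test/event/nbdrandtest

rpc bdev_malloc_create 64 4096            # 64 MB malloc bdev, 4 KiB blocks -> Malloc0
rpc bdev_malloc_create 64 4096            # second one                      -> Malloc1
rpc nbd_start_disk Malloc0 /dev/nbd0      # expose each bdev as an NBD block device
rpc nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of="$tmp" bs=4096 count=256                 # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct      # write it through the NBD device
    cmp -b -n 1M "$tmp" "$nbd"                                 # read back and compare byte-for-byte
done
rm "$tmp"

rpc nbd_stop_disk /dev/nbd0
rpc nbd_stop_disk /dev/nbd1
rpc nbd_get_disks                          # prints [] once nothing is attached any more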
00:07:48.415 10:56:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.415 10:56:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:48.415 10:56:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.415 10:56:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:48.415 10:56:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:48.415 Malloc0 00:07:48.415 10:56:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:48.415 Malloc1 00:07:48.415 10:56:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:48.415 10:56:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:48.675 /dev/nbd0 00:07:48.675 10:56:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:48.675 10:56:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:48.675 10:56:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:48.675 10:56:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:48.675 10:56:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.675 10:56:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.675 10:56:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:48.675 10:56:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:48.675 10:56:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.675 10:56:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.675 10:56:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:48.675 1+0 records in 00:07:48.675 1+0 records out 00:07:48.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226842 s, 18.1 MB/s 00:07:48.675 10:56:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:48.675 10:56:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:48.675 10:56:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:48.675 10:56:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.675 10:56:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:48.675 10:56:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.675 10:56:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:48.675 10:56:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:48.935 /dev/nbd1 00:07:48.935 10:56:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:48.935 10:56:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:48.935 10:56:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:48.935 10:56:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:48.935 10:56:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.935 10:56:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.935 10:56:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:48.935 10:56:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:48.935 10:56:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.936 10:56:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.936 10:56:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:48.936 1+0 records in 00:07:48.936 1+0 records out 00:07:48.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000144652 s, 28.3 MB/s 00:07:48.936 10:56:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:48.936 10:56:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:48.936 10:56:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:48.936 10:56:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.936 10:56:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:48.936 10:56:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.936 10:56:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:48.936 10:56:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:48.936 10:56:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.936 10:56:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:49.196 10:56:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:49.197 { 00:07:49.197 "nbd_device": "/dev/nbd0", 00:07:49.197 "bdev_name": "Malloc0" 00:07:49.197 }, 00:07:49.197 { 00:07:49.197 "nbd_device": "/dev/nbd1", 00:07:49.197 "bdev_name": "Malloc1" 00:07:49.197 } 00:07:49.197 ]' 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:49.197 { 00:07:49.197 "nbd_device": "/dev/nbd0", 00:07:49.197 "bdev_name": "Malloc0" 00:07:49.197 }, 00:07:49.197 { 00:07:49.197 "nbd_device": "/dev/nbd1", 00:07:49.197 "bdev_name": "Malloc1" 00:07:49.197 } 00:07:49.197 ]' 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:49.197 /dev/nbd1' 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:49.197 /dev/nbd1' 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:49.197 256+0 records in 00:07:49.197 256+0 records out 00:07:49.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102979 s, 102 MB/s 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:49.197 256+0 records in 00:07:49.197 256+0 records out 00:07:49.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136754 s, 76.7 MB/s 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:49.197 256+0 records in 00:07:49.197 256+0 records out 00:07:49.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147187 s, 71.2 MB/s 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.197 10:56:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.458 10:56:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:49.717 10:56:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:49.717 10:56:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:49.717 10:56:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:49.717 10:56:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:49.717 10:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:49.717 10:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:49.717 10:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:49.717 10:56:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:49.717 10:56:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:49.717 10:56:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:49.718 10:56:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:49.718 10:56:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:49.718 10:56:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:49.977 10:56:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:50.237 [2024-07-26 10:56:09.514746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:50.237 [2024-07-26 10:56:09.580972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.237 [2024-07-26 10:56:09.580975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.237 [2024-07-26 10:56:09.622603] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:50.237 [2024-07-26 10:56:09.622645] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:53.532 10:56:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:53.532 10:56:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:53.532 spdk_app_start Round 2 00:07:53.532 10:56:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1283859 /var/tmp/spdk-nbd.sock 00:07:53.532 10:56:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1283859 ']' 00:07:53.532 10:56:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:53.532 10:56:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.532 10:56:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:53.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
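The nbd_get_count check that brackets each round reduces to one RPC call plus some jq/grep plumbing. A minimal sketch follows; the commands are taken from the trace, the variable names and the explicit exit are mine.

# Sketch: count the NBD devices currently attached to the app and compare with the expectation.
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

nbd_disks_json=$($rpc nbd_get_disks)                                  # JSON list of {nbd_device, bdev_name}
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')  # one device path per line
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)            # grep -c prints 0 but exits non-zero on no match

[ "$count" -eq 2 ] || { echo "expected 2 NBD devices, got $count" >&2; exit 1; }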
00:07:53.532 10:56:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.532 10:56:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:53.532 10:56:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.532 10:56:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:53.532 10:56:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:53.532 Malloc0 00:07:53.532 10:56:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:53.532 Malloc1 00:07:53.532 10:56:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.532 10:56:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:53.792 /dev/nbd0 00:07:53.792 10:56:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:53.792 10:56:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:53.792 1+0 records in 00:07:53.792 1+0 records out 00:07:53.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194396 s, 21.1 MB/s 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:53.792 10:56:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.792 10:56:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:53.792 10:56:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:53.792 /dev/nbd1 00:07:53.792 10:56:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:53.792 10:56:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:53.792 10:56:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:54.052 1+0 records in 00:07:54.052 1+0 records out 00:07:54.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239443 s, 17.1 MB/s 00:07:54.052 10:56:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:54.052 10:56:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:54.052 10:56:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:54.052 10:56:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:54.052 10:56:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:54.052 { 00:07:54.052 "nbd_device": "/dev/nbd0", 00:07:54.052 "bdev_name": "Malloc0" 00:07:54.052 }, 00:07:54.052 { 00:07:54.052 "nbd_device": "/dev/nbd1", 00:07:54.052 "bdev_name": "Malloc1" 00:07:54.052 } 00:07:54.052 ]' 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:54.052 { 00:07:54.052 "nbd_device": "/dev/nbd0", 00:07:54.052 "bdev_name": "Malloc0" 00:07:54.052 }, 00:07:54.052 { 00:07:54.052 "nbd_device": "/dev/nbd1", 00:07:54.052 "bdev_name": "Malloc1" 00:07:54.052 } 00:07:54.052 ]' 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:54.052 /dev/nbd1' 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:54.052 /dev/nbd1' 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:54.052 256+0 records in 00:07:54.052 256+0 records out 00:07:54.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102936 s, 102 MB/s 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.052 10:56:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:54.313 256+0 records in 00:07:54.313 256+0 records out 00:07:54.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138119 s, 75.9 MB/s 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:54.313 256+0 records in 00:07:54.313 256+0 records out 00:07:54.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145208 s, 72.2 MB/s 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.313 10:56:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:54.572 10:56:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:54.572 10:56:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:54.572 10:56:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:54.572 10:56:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.572 10:56:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.572 10:56:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:54.572 10:56:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:54.572 10:56:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.572 10:56:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:54.572 10:56:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.572 10:56:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.832 10:56:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:54.832 10:56:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:54.832 10:56:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:54.832 10:56:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:54.832 10:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:54.832 10:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:54.832 10:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:54.832 10:56:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:54.832 10:56:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:54.832 10:56:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:54.832 10:56:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:54.832 10:56:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:54.832 10:56:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:55.092 10:56:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:55.092 [2024-07-26 10:56:14.559796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:55.352 [2024-07-26 10:56:14.634262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.352 [2024-07-26 10:56:14.634263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.352 [2024-07-26 10:56:14.675425] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:55.352 [2024-07-26 10:56:14.675466] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:57.894 10:56:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1283859 /var/tmp/spdk-nbd.sock 00:07:57.894 10:56:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1283859 ']' 00:07:57.894 10:56:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:57.894 10:56:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.894 10:56:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:57.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:57.894 10:56:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.894 10:56:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:58.154 10:56:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.154 10:56:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:58.154 10:56:17 event.app_repeat -- event/event.sh@39 -- # killprocess 1283859 00:07:58.154 10:56:17 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1283859 ']' 00:07:58.154 10:56:17 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1283859 00:07:58.154 10:56:17 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:58.154 10:56:17 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.154 10:56:17 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1283859 00:07:58.154 10:56:17 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:58.154 10:56:17 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:58.154 10:56:17 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1283859' 00:07:58.154 killing process with pid 1283859 00:07:58.154 10:56:17 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1283859 00:07:58.154 10:56:17 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1283859 00:07:58.414 spdk_app_start is called in Round 0. 00:07:58.414 Shutdown signal received, stop current app iteration 00:07:58.414 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:58.414 spdk_app_start is called in Round 1. 00:07:58.414 Shutdown signal received, stop current app iteration 00:07:58.414 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:58.414 spdk_app_start is called in Round 2. 00:07:58.414 Shutdown signal received, stop current app iteration 00:07:58.414 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:07:58.414 spdk_app_start is called in Round 3. 
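The teardown just traced calls the suite's killprocess helper. Reconstructed from the traced checks it behaves roughly as below; the real helper in test/common/autotest_common.sh is more thorough (it handles the sudo-wrapper case differently, for one), so treat this as an approximation rather than the actual implementation.

# Approximate reconstruction of the killprocess pattern seen in the trace.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1               # the traced '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 0  # nothing to do if it already exited
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # traced check: if the pid is a bare sudo wrapper, bail out here
        # (the real helper retargets the child instead)
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                     # reap it; a non-zero exit is expected when killed
}

killprocess 1283859    # pid from the trace above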
00:07:58.414 Shutdown signal received, stop current app iteration 00:07:58.414 10:56:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:58.414 10:56:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:58.414 00:07:58.414 real 0m16.175s 00:07:58.414 user 0m35.171s 00:07:58.414 sys 0m2.369s 00:07:58.414 10:56:17 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.414 10:56:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:58.414 ************************************ 00:07:58.414 END TEST app_repeat 00:07:58.414 ************************************ 00:07:58.414 10:56:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:58.414 10:56:17 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:58.414 10:56:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.414 10:56:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.414 10:56:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:58.414 ************************************ 00:07:58.414 START TEST cpu_locks 00:07:58.414 ************************************ 00:07:58.414 10:56:17 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:58.675 * Looking for test storage... 00:07:58.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:58.675 10:56:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:58.675 10:56:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:58.675 10:56:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:58.675 10:56:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:58.675 10:56:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.675 10:56:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.675 10:56:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.675 ************************************ 00:07:58.675 START TEST default_locks 00:07:58.675 ************************************ 00:07:58.675 10:56:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:58.675 10:56:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1286869 00:07:58.675 10:56:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:58.675 10:56:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1286869 00:07:58.675 10:56:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1286869 ']' 00:07:58.675 10:56:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.675 10:56:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.675 10:56:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
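The default_locks run that begins here uses the launch pattern seen throughout cpu_locks.sh: start spdk_tgt on a core mask, then wait until its RPC socket answers. A minimal sketch of that pattern, assuming the binary and socket paths shown in the log and using rpc_get_methods as the readiness probe; the real waitforlisten helper lives in autotest_common.sh and is more involved:

    build/bin/spdk_tgt -m 0x1 &
    spdk_tgt_pid=$!
    # poll until the target is listening on its UNIX domain socket before issuing further RPCs
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "spdk_tgt ($spdk_tgt_pid) is ready on /var/tmp/spdk.sock"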
00:07:58.675 10:56:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.675 10:56:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.675 [2024-07-26 10:56:18.004439] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:58.675 [2024-07-26 10:56:18.004478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286869 ] 00:07:58.675 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.675 [2024-07-26 10:56:18.058269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.675 [2024-07-26 10:56:18.131148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.616 10:56:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.616 10:56:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:59.616 10:56:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1286869 00:07:59.616 10:56:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1286869 00:07:59.616 10:56:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:59.616 lslocks: write error 00:07:59.616 10:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1286869 00:07:59.616 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1286869 ']' 00:07:59.616 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1286869 00:07:59.616 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:59.616 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:59.616 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1286869 00:07:59.616 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:59.616 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:59.616 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1286869' 00:07:59.616 killing process with pid 1286869 00:07:59.617 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1286869 00:07:59.617 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1286869 00:07:59.876 10:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1286869 00:07:59.876 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1286869 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 1286869 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1286869 ']' 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:59.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1286869) - No such process 00:07:59.877 ERROR: process (pid: 1286869) is no longer running 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:59.877 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:00.140 10:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:00.140 10:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:00.140 10:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:00.140 10:56:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:00.140 00:08:00.140 real 0m1.420s 00:08:00.140 user 0m1.488s 00:08:00.140 sys 0m0.452s 00:08:00.140 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.140 10:56:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.140 ************************************ 00:08:00.140 END TEST default_locks 00:08:00.140 ************************************ 00:08:00.140 10:56:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:00.140 10:56:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:00.140 10:56:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.140 10:56:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:00.140 ************************************ 00:08:00.140 START TEST default_locks_via_rpc 00:08:00.140 ************************************ 00:08:00.140 10:56:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:08:00.140 10:56:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1287127 00:08:00.140 10:56:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1287127 00:08:00.140 10:56:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:08:00.140 10:56:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1287127 ']' 00:08:00.140 10:56:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.140 10:56:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.140 10:56:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.140 10:56:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.140 10:56:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.140 [2024-07-26 10:56:19.492702] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:00.140 [2024-07-26 10:56:19.492746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287127 ] 00:08:00.140 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.140 [2024-07-26 10:56:19.547475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.140 [2024-07-26 10:56:19.614936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1287127 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1287127 00:08:01.083 10:56:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:01.343 10:56:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 
1287127 00:08:01.343 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1287127 ']' 00:08:01.343 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1287127 00:08:01.343 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:01.343 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.343 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1287127 00:08:01.343 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:01.343 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:01.343 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1287127' 00:08:01.343 killing process with pid 1287127 00:08:01.343 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1287127 00:08:01.343 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1287127 00:08:01.603 00:08:01.603 real 0m1.535s 00:08:01.603 user 0m1.614s 00:08:01.603 sys 0m0.502s 00:08:01.603 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.603 10:56:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.603 ************************************ 00:08:01.603 END TEST default_locks_via_rpc 00:08:01.603 ************************************ 00:08:01.603 10:56:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:01.603 10:56:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.603 10:56:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.603 10:56:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:01.603 ************************************ 00:08:01.603 START TEST non_locking_app_on_locked_coremask 00:08:01.603 ************************************ 00:08:01.603 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:01.603 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1287394 00:08:01.603 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1287394 /var/tmp/spdk.sock 00:08:01.603 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:01.603 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1287394 ']' 00:08:01.603 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.603 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.603 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:01.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.603 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.603 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:01.603 [2024-07-26 10:56:21.086162] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:01.603 [2024-07-26 10:56:21.086206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287394 ] 00:08:01.861 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.862 [2024-07-26 10:56:21.137828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.862 [2024-07-26 10:56:21.217630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.430 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.430 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:02.430 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:02.430 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1287616 00:08:02.430 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1287616 /var/tmp/spdk2.sock 00:08:02.430 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1287616 ']' 00:08:02.430 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:02.430 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.430 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:02.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:02.430 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.430 10:56:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:02.430 [2024-07-26 10:56:21.916512] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:02.430 [2024-07-26 10:56:21.916558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287616 ] 00:08:02.696 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.696 [2024-07-26 10:56:21.985666] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
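The locks_exist helper seen on both sides of this point boils down to asking lslocks which files the target process holds locked and grepping for the per-core lock names; the stray 'lslocks: write error' lines are most likely lslocks hitting a closed pipe once grep -q has matched and exited. A condensed sketch with an illustrative PID variable:

    locks_exist() {
        local pid=$1
        # the per-core lock files are /var/tmp/spdk_cpu_lock_NNN, one per claimed core
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist "$spdk_tgt_pid" && echo "core locks are held by $spdk_tgt_pid"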
00:08:02.696 [2024-07-26 10:56:21.985690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.696 [2024-07-26 10:56:22.137960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.264 10:56:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.264 10:56:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:03.264 10:56:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1287394 00:08:03.264 10:56:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1287394 00:08:03.264 10:56:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:03.834 lslocks: write error 00:08:03.834 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1287394 00:08:03.834 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1287394 ']' 00:08:03.834 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1287394 00:08:03.834 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:03.834 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.834 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1287394 00:08:03.834 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.834 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.834 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1287394' 00:08:03.834 killing process with pid 1287394 00:08:03.834 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1287394 00:08:03.834 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1287394 00:08:04.402 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1287616 00:08:04.402 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1287616 ']' 00:08:04.402 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1287616 00:08:04.402 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:04.402 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.402 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1287616 00:08:04.402 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:04.402 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:04.402 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1287616' 00:08:04.402 
killing process with pid 1287616 00:08:04.402 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1287616 00:08:04.402 10:56:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1287616 00:08:04.662 00:08:04.662 real 0m3.101s 00:08:04.662 user 0m3.302s 00:08:04.662 sys 0m0.871s 00:08:04.662 10:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.662 10:56:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:04.662 ************************************ 00:08:04.662 END TEST non_locking_app_on_locked_coremask 00:08:04.662 ************************************ 00:08:04.922 10:56:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:04.922 10:56:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.922 10:56:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.922 10:56:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:04.922 ************************************ 00:08:04.922 START TEST locking_app_on_unlocked_coremask 00:08:04.922 ************************************ 00:08:04.922 10:56:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:04.922 10:56:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:04.922 10:56:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1287969 00:08:04.922 10:56:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1287969 /var/tmp/spdk.sock 00:08:04.922 10:56:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1287969 ']' 00:08:04.922 10:56:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.922 10:56:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.922 10:56:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.922 10:56:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.922 10:56:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:04.922 [2024-07-26 10:56:24.236061] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:04.922 [2024-07-26 10:56:24.236102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287969 ] 00:08:04.922 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.922 [2024-07-26 10:56:24.291019] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
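The locking_app_on_unlocked_coremask setup that starts here launches the first target with core locks switched off, which is why the 'CPU core locks deactivated.' notice appears, and then brings up a second target on the same mask with its own RPC socket so that it can take the core lock itself. A condensed illustration using the flags and socket paths from the log, not the test script verbatim:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &      # prints: CPU core locks deactivated.
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &       # second instance; it claims the core 0 lock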
00:08:04.922 [2024-07-26 10:56:24.291050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.922 [2024-07-26 10:56:24.370616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.858 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.858 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:05.858 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:05.858 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1288122 00:08:05.858 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1288122 /var/tmp/spdk2.sock 00:08:05.858 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1288122 ']' 00:08:05.858 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:05.858 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.858 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:05.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:05.858 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.858 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:05.858 [2024-07-26 10:56:25.082154] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:05.858 [2024-07-26 10:56:25.082201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288122 ] 00:08:05.858 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.858 [2024-07-26 10:56:25.156808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.858 [2024-07-26 10:56:25.310500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.427 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.427 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:06.427 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1288122 00:08:06.427 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1288122 00:08:06.427 10:56:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:06.688 lslocks: write error 00:08:06.688 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1287969 00:08:06.688 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1287969 ']' 00:08:06.688 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1287969 00:08:06.688 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:06.688 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.688 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1287969 00:08:06.688 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:06.688 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.688 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1287969' 00:08:06.688 killing process with pid 1287969 00:08:06.688 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1287969 00:08:06.688 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1287969 00:08:07.626 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1288122 00:08:07.626 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1288122 ']' 00:08:07.626 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1288122 00:08:07.626 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:07.626 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.626 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1288122 00:08:07.626 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:08:07.626 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:07.626 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1288122' 00:08:07.626 killing process with pid 1288122 00:08:07.626 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1288122 00:08:07.626 10:56:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1288122 00:08:07.886 00:08:07.886 real 0m2.937s 00:08:07.886 user 0m3.144s 00:08:07.886 sys 0m0.788s 00:08:07.886 10:56:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.886 10:56:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.886 ************************************ 00:08:07.886 END TEST locking_app_on_unlocked_coremask 00:08:07.886 ************************************ 00:08:07.886 10:56:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:07.886 10:56:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.886 10:56:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.886 10:56:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:07.886 ************************************ 00:08:07.886 START TEST locking_app_on_locked_coremask 00:08:07.886 ************************************ 00:08:07.886 10:56:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:07.886 10:56:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1288612 00:08:07.886 10:56:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1288612 /var/tmp/spdk.sock 00:08:07.886 10:56:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:07.886 10:56:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1288612 ']' 00:08:07.886 10:56:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.886 10:56:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.886 10:56:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.886 10:56:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.886 10:56:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:07.886 [2024-07-26 10:56:27.256859] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:07.886 [2024-07-26 10:56:27.256899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288612 ] 00:08:07.886 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.886 [2024-07-26 10:56:27.309019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.146 [2024-07-26 10:56:27.389870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1288625 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1288625 /var/tmp/spdk2.sock 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1288625 /var/tmp/spdk2.sock 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1288625 /var/tmp/spdk2.sock 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1288625 ']' 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:08.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.715 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.715 [2024-07-26 10:56:28.082768] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:08.715 [2024-07-26 10:56:28.082813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288625 ] 00:08:08.715 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.715 [2024-07-26 10:56:28.151944] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1288612 has claimed it. 00:08:08.715 [2024-07-26 10:56:28.151976] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:09.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1288625) - No such process 00:08:09.284 ERROR: process (pid: 1288625) is no longer running 00:08:09.284 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.284 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:09.284 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:09.284 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:09.284 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:09.284 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:09.284 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1288612 00:08:09.284 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1288612 00:08:09.284 10:56:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:09.853 lslocks: write error 00:08:09.853 10:56:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1288612 00:08:09.853 10:56:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1288612 ']' 00:08:09.853 10:56:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1288612 00:08:09.853 10:56:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:09.853 10:56:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.853 10:56:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1288612 00:08:09.853 10:56:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.853 10:56:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.853 10:56:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1288612' 00:08:09.853 killing process with pid 1288612 00:08:09.853 10:56:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1288612 00:08:09.853 10:56:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1288612 00:08:10.112 00:08:10.112 real 0m2.226s 00:08:10.112 user 0m2.450s 00:08:10.112 sys 0m0.602s 00:08:10.112 10:56:29 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.112 10:56:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.112 ************************************ 00:08:10.112 END TEST locking_app_on_locked_coremask 00:08:10.112 ************************************ 00:08:10.112 10:56:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:10.112 10:56:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.112 10:56:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.112 10:56:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:10.112 ************************************ 00:08:10.112 START TEST locking_overlapped_coremask 00:08:10.113 ************************************ 00:08:10.113 10:56:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:10.113 10:56:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1288889 00:08:10.113 10:56:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1288889 /var/tmp/spdk.sock 00:08:10.113 10:56:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:10.113 10:56:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1288889 ']' 00:08:10.113 10:56:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.113 10:56:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.113 10:56:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.113 10:56:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.113 10:56:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.113 [2024-07-26 10:56:29.550141] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:10.113 [2024-07-26 10:56:29.550182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288889 ] 00:08:10.113 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.113 [2024-07-26 10:56:29.605829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.373 [2024-07-26 10:56:29.685968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.373 [2024-07-26 10:56:29.686069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.373 [2024-07-26 10:56:29.686071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1289121 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1289121 /var/tmp/spdk2.sock 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1289121 /var/tmp/spdk2.sock 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1289121 /var/tmp/spdk2.sock 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1289121 ']' 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:10.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.943 10:56:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.943 [2024-07-26 10:56:30.390480] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
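The overlap being exercised here is purely a core-mask one: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two targets collide on core 2, which is exactly what the claim_cpu_cores error in the records that follow reports. A quick, illustrative way to decode such a mask (not part of the test itself):

    for core in $(seq 0 7); do (( (0x1c >> core) & 1 )) && echo "core $core is in mask 0x1c"; done   # prints cores 2, 3, 4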
00:08:10.943 [2024-07-26 10:56:30.390525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289121 ] 00:08:10.943 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.203 [2024-07-26 10:56:30.466404] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1288889 has claimed it. 00:08:11.203 [2024-07-26 10:56:30.466446] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:11.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1289121) - No such process 00:08:11.772 ERROR: process (pid: 1289121) is no longer running 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1288889 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1288889 ']' 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1288889 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1288889 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1288889' 00:08:11.772 killing process with pid 1288889 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 1288889 00:08:11.772 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1288889 00:08:12.032 00:08:12.032 real 0m1.877s 00:08:12.032 user 0m5.270s 00:08:12.032 sys 0m0.406s 00:08:12.032 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.032 10:56:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.032 ************************************ 00:08:12.032 END TEST locking_overlapped_coremask 00:08:12.032 ************************************ 00:08:12.032 10:56:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:12.032 10:56:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:12.032 10:56:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.032 10:56:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:12.032 ************************************ 00:08:12.032 START TEST locking_overlapped_coremask_via_rpc 00:08:12.032 ************************************ 00:08:12.032 10:56:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:12.032 10:56:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1289377 00:08:12.032 10:56:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1289377 /var/tmp/spdk.sock 00:08:12.032 10:56:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:12.032 10:56:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1289377 ']' 00:08:12.032 10:56:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.032 10:56:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.032 10:56:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.032 10:56:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.032 10:56:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.032 [2024-07-26 10:56:31.491452] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:12.032 [2024-07-26 10:56:31.491496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289377 ] 00:08:12.032 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.292 [2024-07-26 10:56:31.547490] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
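The check_remaining_locks step visible a few records back verifies that the -m 0x7 target left behind exactly one lock file per claimed core. A condensed sketch of that comparison, using the same glob and brace expansion the log shows:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})      # cores 0-2 for mask 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "lock files match the claimed cores"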
00:08:12.292 [2024-07-26 10:56:31.547515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:12.292 [2024-07-26 10:56:31.629490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.292 [2024-07-26 10:56:31.629575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.292 [2024-07-26 10:56:31.629585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.862 10:56:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.862 10:56:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:12.862 10:56:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1289395 00:08:12.862 10:56:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1289395 /var/tmp/spdk2.sock 00:08:12.862 10:56:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:12.862 10:56:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1289395 ']' 00:08:12.862 10:56:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:12.862 10:56:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.862 10:56:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:12.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:12.862 10:56:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.862 10:56:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.862 [2024-07-26 10:56:32.345387] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:12.863 [2024-07-26 10:56:32.345435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289395 ] 00:08:13.123 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.123 [2024-07-26 10:56:32.422588] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:13.123 [2024-07-26 10:56:32.422618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:13.123 [2024-07-26 10:56:32.581194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.123 [2024-07-26 10:56:32.581305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.123 [2024-07-26 10:56:32.581306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.694 [2024-07-26 10:56:33.169117] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1289377 has claimed it. 
00:08:13.694 request: 00:08:13.694 { 00:08:13.694 "method": "framework_enable_cpumask_locks", 00:08:13.694 "req_id": 1 00:08:13.694 } 00:08:13.694 Got JSON-RPC error response 00:08:13.694 response: 00:08:13.694 { 00:08:13.694 "code": -32603, 00:08:13.694 "message": "Failed to claim CPU core: 2" 00:08:13.694 } 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1289377 /var/tmp/spdk.sock 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1289377 ']' 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.694 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.971 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.971 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:13.971 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1289395 /var/tmp/spdk2.sock 00:08:13.972 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1289395 ']' 00:08:13.972 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:13.972 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.972 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:13.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
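Note: the failure above is the expected outcome of this test. The first target was launched with -m 0x7 and the second with -m 0x1c, both with --disable-cpumask-locks; once the first target (pid 1289377) claims its cores via framework_enable_cpumask_locks, the second target's attempt fails with -32603 because the two masks overlap on core 2. A minimal sketch of the overlap, assuming nothing beyond plain bash arithmetic (not part of the test scripts themselves):

    for mask in 0x7 0x1c; do
      cores=()
      for ((i = 0; i < 8; i++)); do
        (( (mask >> i) & 1 )) && cores+=("$i")
      done
      echo "$mask -> cores ${cores[*]}"
    done
    # 0x7  -> cores 0 1 2
    # 0x1c -> cores 2 3 4   (core 2 is in both masks, hence "Failed to claim CPU core: 2")

The claimed cores are backed by the /var/tmp/spdk_cpu_lock_* files that check_remaining_locks inspects just below.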
00:08:13.972 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.972 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.233 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.233 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:14.233 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:14.233 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:14.233 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:14.233 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:14.233 00:08:14.233 real 0m2.112s 00:08:14.233 user 0m0.887s 00:08:14.233 sys 0m0.158s 00:08:14.233 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.233 10:56:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:14.233 ************************************ 00:08:14.233 END TEST locking_overlapped_coremask_via_rpc 00:08:14.233 ************************************ 00:08:14.233 10:56:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:14.233 10:56:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1289377 ]] 00:08:14.233 10:56:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1289377 00:08:14.233 10:56:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1289377 ']' 00:08:14.233 10:56:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1289377 00:08:14.234 10:56:33 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:14.234 10:56:33 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.234 10:56:33 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1289377 00:08:14.234 10:56:33 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:14.234 10:56:33 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:14.234 10:56:33 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1289377' 00:08:14.234 killing process with pid 1289377 00:08:14.234 10:56:33 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1289377 00:08:14.234 10:56:33 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1289377 00:08:14.493 10:56:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1289395 ]] 00:08:14.493 10:56:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1289395 00:08:14.493 10:56:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1289395 ']' 00:08:14.493 10:56:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1289395 00:08:14.493 10:56:33 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:14.493 10:56:33 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:08:14.493 10:56:33 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1289395 00:08:14.493 10:56:33 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:14.493 10:56:33 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:14.493 10:56:33 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1289395' 00:08:14.493 killing process with pid 1289395 00:08:14.493 10:56:33 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1289395 00:08:14.493 10:56:33 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1289395 00:08:14.848 10:56:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:14.848 10:56:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:14.848 10:56:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1289377 ]] 00:08:14.848 10:56:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1289377 00:08:14.848 10:56:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1289377 ']' 00:08:14.848 10:56:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1289377 00:08:14.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1289377) - No such process 00:08:14.848 10:56:34 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1289377 is not found' 00:08:14.848 Process with pid 1289377 is not found 00:08:14.848 10:56:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1289395 ]] 00:08:14.848 10:56:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1289395 00:08:14.848 10:56:34 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1289395 ']' 00:08:14.848 10:56:34 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1289395 00:08:14.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1289395) - No such process 00:08:14.848 10:56:34 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1289395 is not found' 00:08:14.848 Process with pid 1289395 is not found 00:08:14.848 10:56:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:14.848 00:08:14.848 real 0m16.482s 00:08:14.848 user 0m28.679s 00:08:14.848 sys 0m4.651s 00:08:14.848 10:56:34 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.848 10:56:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:14.848 ************************************ 00:08:14.848 END TEST cpu_locks 00:08:14.848 ************************************ 00:08:14.848 00:08:14.848 real 0m41.853s 00:08:14.848 user 1m20.904s 00:08:14.848 sys 0m7.907s 00:08:15.109 10:56:34 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.109 10:56:34 event -- common/autotest_common.sh@10 -- # set +x 00:08:15.109 ************************************ 00:08:15.109 END TEST event 00:08:15.109 ************************************ 00:08:15.109 10:56:34 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:15.109 10:56:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:15.109 10:56:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.109 10:56:34 -- common/autotest_common.sh@10 -- # set +x 00:08:15.109 ************************************ 00:08:15.109 START TEST thread 00:08:15.109 ************************************ 00:08:15.109 10:56:34 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:15.109 * Looking for test storage... 00:08:15.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:15.109 10:56:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:15.109 10:56:34 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:15.109 10:56:34 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.109 10:56:34 thread -- common/autotest_common.sh@10 -- # set +x 00:08:15.109 ************************************ 00:08:15.109 START TEST thread_poller_perf 00:08:15.109 ************************************ 00:08:15.109 10:56:34 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:15.109 [2024-07-26 10:56:34.552893] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:15.109 [2024-07-26 10:56:34.552967] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289944 ] 00:08:15.109 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.417 [2024-07-26 10:56:34.608937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.417 [2024-07-26 10:56:34.682049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.417 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:16.356 ====================================== 00:08:16.356 busy:2307883034 (cyc) 00:08:16.356 total_run_count: 408000 00:08:16.356 tsc_hz: 2300000000 (cyc) 00:08:16.356 ====================================== 00:08:16.356 poller_cost: 5656 (cyc), 2459 (nsec) 00:08:16.356 00:08:16.356 real 0m1.225s 00:08:16.356 user 0m1.147s 00:08:16.356 sys 0m0.073s 00:08:16.356 10:56:35 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.356 10:56:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:16.356 ************************************ 00:08:16.356 END TEST thread_poller_perf 00:08:16.356 ************************************ 00:08:16.356 10:56:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:16.356 10:56:35 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:16.356 10:56:35 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.356 10:56:35 thread -- common/autotest_common.sh@10 -- # set +x 00:08:16.356 ************************************ 00:08:16.356 START TEST thread_poller_perf 00:08:16.356 ************************************ 00:08:16.356 10:56:35 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:16.356 [2024-07-26 10:56:35.834071] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
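Note: the poller_cost figures printed above follow directly from the counters in the same block: busy cycles divided by total_run_count gives the cost of one poll in cycles, and dividing by the 2.3 GHz TSC rate converts that to nanoseconds. A quick sketch of the arithmetic, assuming only bash and awk:

    busy=2307883034 runs=408000 tsc_hz=2300000000
    awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
        'BEGIN { cyc = b / r; printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz }'
    # -> poller_cost: 5656 (cyc), 2459 (nsec)

The 0-microsecond run that follows works out the same way: 2301540142 / 5375000 is roughly 428 cycles per poll, about 186 nsec at 2.3 GHz.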
00:08:16.356 [2024-07-26 10:56:35.834141] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290195 ] 00:08:16.615 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.615 [2024-07-26 10:56:35.890034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.615 [2024-07-26 10:56:35.961293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.615 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:17.555 ====================================== 00:08:17.555 busy:2301540142 (cyc) 00:08:17.555 total_run_count: 5375000 00:08:17.555 tsc_hz: 2300000000 (cyc) 00:08:17.555 ====================================== 00:08:17.555 poller_cost: 428 (cyc), 186 (nsec) 00:08:17.555 00:08:17.555 real 0m1.218s 00:08:17.555 user 0m1.144s 00:08:17.555 sys 0m0.070s 00:08:17.555 10:56:37 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.555 10:56:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:17.555 ************************************ 00:08:17.555 END TEST thread_poller_perf 00:08:17.555 ************************************ 00:08:17.816 10:56:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:17.816 00:08:17.816 real 0m2.657s 00:08:17.816 user 0m2.376s 00:08:17.816 sys 0m0.289s 00:08:17.816 10:56:37 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.816 10:56:37 thread -- common/autotest_common.sh@10 -- # set +x 00:08:17.816 ************************************ 00:08:17.816 END TEST thread 00:08:17.816 ************************************ 00:08:17.816 10:56:37 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:08:17.816 10:56:37 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:17.816 10:56:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.816 10:56:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.816 10:56:37 -- common/autotest_common.sh@10 -- # set +x 00:08:17.816 ************************************ 00:08:17.816 START TEST app_cmdline 00:08:17.816 ************************************ 00:08:17.816 10:56:37 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:17.816 * Looking for test storage... 00:08:17.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:17.816 10:56:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:17.816 10:56:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1290480 00:08:17.816 10:56:37 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:17.816 10:56:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1290480 00:08:17.816 10:56:37 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1290480 ']' 00:08:17.816 10:56:37 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.816 10:56:37 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.816 10:56:37 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:17.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.816 10:56:37 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.816 10:56:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:17.816 [2024-07-26 10:56:37.260770] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:17.816 [2024-07-26 10:56:37.260816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290480 ] 00:08:17.816 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.076 [2024-07-26 10:56:37.317012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.076 [2024-07-26 10:56:37.392626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.645 10:56:38 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.645 10:56:38 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:18.645 10:56:38 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:18.905 { 00:08:18.906 "version": "SPDK v24.09-pre git sha1 704257090", 00:08:18.906 "fields": { 00:08:18.906 "major": 24, 00:08:18.906 "minor": 9, 00:08:18.906 "patch": 0, 00:08:18.906 "suffix": "-pre", 00:08:18.906 "commit": "704257090" 00:08:18.906 } 00:08:18.906 } 00:08:18.906 10:56:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:18.906 10:56:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:18.906 10:56:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:18.906 10:56:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:18.906 10:56:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:18.906 10:56:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:18.906 10:56:38 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.906 10:56:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:18.906 10:56:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:18.906 10:56:38 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.906 10:56:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:18.906 10:56:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:18.906 10:56:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.906 10:56:38 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:18.906 10:56:38 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:18.906 10:56:38 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.906 10:56:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.906 10:56:38 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.906 10:56:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:08:18.906 10:56:38 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.906 10:56:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.906 10:56:38 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.906 10:56:38 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:18.906 10:56:38 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:19.165 request: 00:08:19.165 { 00:08:19.165 "method": "env_dpdk_get_mem_stats", 00:08:19.165 "req_id": 1 00:08:19.165 } 00:08:19.165 Got JSON-RPC error response 00:08:19.165 response: 00:08:19.166 { 00:08:19.166 "code": -32601, 00:08:19.166 "message": "Method not found" 00:08:19.166 } 00:08:19.166 10:56:38 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:19.166 10:56:38 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.166 10:56:38 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.166 10:56:38 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.166 10:56:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1290480 00:08:19.166 10:56:38 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1290480 ']' 00:08:19.166 10:56:38 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1290480 00:08:19.166 10:56:38 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:19.166 10:56:38 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.166 10:56:38 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1290480 00:08:19.166 10:56:38 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.166 10:56:38 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.166 10:56:38 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1290480' 00:08:19.166 killing process with pid 1290480 00:08:19.166 10:56:38 app_cmdline -- common/autotest_common.sh@969 -- # kill 1290480 00:08:19.166 10:56:38 app_cmdline -- common/autotest_common.sh@974 -- # wait 1290480 00:08:19.426 00:08:19.426 real 0m1.672s 00:08:19.426 user 0m1.995s 00:08:19.426 sys 0m0.428s 00:08:19.426 10:56:38 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.426 10:56:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:19.426 ************************************ 00:08:19.426 END TEST app_cmdline 00:08:19.426 ************************************ 00:08:19.426 10:56:38 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:19.426 10:56:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.426 10:56:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.426 10:56:38 -- common/autotest_common.sh@10 -- # set +x 00:08:19.426 ************************************ 00:08:19.426 START TEST version 00:08:19.426 ************************************ 00:08:19.426 10:56:38 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:19.687 * Looking for test storage... 
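Note: the -32601 response above is the point of the check. spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable on /var/tmp/spdk.sock and anything else, here env_dpdk_get_mem_stats, is rejected as "Method not found". A rough sketch of the same three calls done by hand, assuming the workspace path used throughout this log:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" spdk_get_version         # allowed  -> the version JSON shown earlier
    "$RPC" rpc_get_methods          # allowed  -> just rpc_get_methods and spdk_get_version
    "$RPC" env_dpdk_get_mem_stats   # filtered -> JSON-RPC error -32601 "Method not found"

Contrast this with the -32603 error in the cpu_locks test earlier, where the method itself was permitted but the operation failed.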
00:08:19.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:19.687 10:56:38 version -- app/version.sh@17 -- # get_header_version major 00:08:19.687 10:56:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:19.687 10:56:38 version -- app/version.sh@14 -- # cut -f2 00:08:19.687 10:56:38 version -- app/version.sh@14 -- # tr -d '"' 00:08:19.687 10:56:38 version -- app/version.sh@17 -- # major=24 00:08:19.687 10:56:38 version -- app/version.sh@18 -- # get_header_version minor 00:08:19.687 10:56:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:19.687 10:56:38 version -- app/version.sh@14 -- # cut -f2 00:08:19.687 10:56:38 version -- app/version.sh@14 -- # tr -d '"' 00:08:19.687 10:56:38 version -- app/version.sh@18 -- # minor=9 00:08:19.687 10:56:38 version -- app/version.sh@19 -- # get_header_version patch 00:08:19.687 10:56:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:19.687 10:56:38 version -- app/version.sh@14 -- # cut -f2 00:08:19.687 10:56:38 version -- app/version.sh@14 -- # tr -d '"' 00:08:19.687 10:56:38 version -- app/version.sh@19 -- # patch=0 00:08:19.687 10:56:38 version -- app/version.sh@20 -- # get_header_version suffix 00:08:19.687 10:56:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:19.687 10:56:38 version -- app/version.sh@14 -- # cut -f2 00:08:19.687 10:56:38 version -- app/version.sh@14 -- # tr -d '"' 00:08:19.687 10:56:38 version -- app/version.sh@20 -- # suffix=-pre 00:08:19.687 10:56:38 version -- app/version.sh@22 -- # version=24.9 00:08:19.687 10:56:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:19.687 10:56:38 version -- app/version.sh@28 -- # version=24.9rc0 00:08:19.687 10:56:38 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:19.687 10:56:38 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:19.687 10:56:39 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:19.687 10:56:39 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:19.687 00:08:19.687 real 0m0.155s 00:08:19.687 user 0m0.089s 00:08:19.687 sys 0m0.105s 00:08:19.687 10:56:39 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.687 10:56:39 version -- common/autotest_common.sh@10 -- # set +x 00:08:19.687 ************************************ 00:08:19.687 END TEST version 00:08:19.687 ************************************ 00:08:19.687 10:56:39 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:08:19.687 10:56:39 -- spdk/autotest.sh@202 -- # uname -s 00:08:19.687 10:56:39 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:08:19.687 10:56:39 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:19.687 10:56:39 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:08:19.687 10:56:39 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
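Note: the version test above rebuilds the version string straight from include/spdk/version.h: major=24, minor=9, patch=0 and suffix=-pre combine into 24.9, and because the suffix is -pre the script compares against the Python package as 24.9rc0, which is exactly what python3 -c 'import spdk; print(spdk.__version__)' returned. A sketch of one of those header lookups, assuming the same tab-separated #define layout:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'
    # -> 24   (repeat for MINOR, PATCH and SUFFIX to rebuild "24.9" / "24.9rc0")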
00:08:19.687 10:56:39 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:19.687 10:56:39 -- spdk/autotest.sh@264 -- # timing_exit lib 00:08:19.687 10:56:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:19.687 10:56:39 -- common/autotest_common.sh@10 -- # set +x 00:08:19.687 10:56:39 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:08:19.687 10:56:39 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:08:19.687 10:56:39 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:08:19.687 10:56:39 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:08:19.687 10:56:39 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:08:19.687 10:56:39 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:08:19.687 10:56:39 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:19.687 10:56:39 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:19.687 10:56:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.687 10:56:39 -- common/autotest_common.sh@10 -- # set +x 00:08:19.687 ************************************ 00:08:19.687 START TEST nvmf_tcp 00:08:19.687 ************************************ 00:08:19.687 10:56:39 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:19.948 * Looking for test storage... 00:08:19.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:19.948 10:56:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:19.948 10:56:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:19.948 10:56:39 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:19.948 10:56:39 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:19.948 10:56:39 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.948 10:56:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:19.948 ************************************ 00:08:19.948 START TEST nvmf_target_core 00:08:19.948 ************************************ 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:19.948 * Looking for test storage... 00:08:19.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:19.948 10:56:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:19.949 10:56:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:19.949 10:56:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.949 10:56:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.949 ************************************ 00:08:19.949 START TEST nvmf_abort 00:08:19.949 ************************************ 00:08:19.949 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:20.209 * Looking for test storage... 
00:08:20.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.209 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.209 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:20.209 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.209 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.209 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.209 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.209 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.209 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.209 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.209 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.209 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.210 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:25.491 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:25.491 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.491 10:56:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:25.491 Found net devices under 0000:86:00.0: cvl_0_0 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:25.491 Found net devices under 0000:86:00.1: cvl_0_1 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.491 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.492 
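Note: NIC discovery is complete at this point. Both PCI functions, 0000:86:00.0 and 0000:86:00.1, matched the Intel E810 device ID 0x159b (ice driver) and exposed the net devices cvl_0_0 and cvl_0_1; with two interfaces available, cvl_0_0 becomes the target-side interface and cvl_0_1 the initiator side. A small sketch of the sysfs lookup the helper relies on, assuming the same layout seen above:

    for pci in 0000:86:00.0 0000:86:00.1; do
      echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
    done
    # -> 0000:86:00.0 -> cvl_0_0
    #    0000:86:00.1 -> cvl_0_1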
10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.492 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.492 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.492 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.492 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.492 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.492 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.492 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.492 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.492 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:08:25.492 00:08:25.492 --- 10.0.0.2 ping statistics --- 00:08:25.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.492 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:08:25.492 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:08:25.752 00:08:25.752 --- 10.0.0.1 ping statistics --- 00:08:25.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.752 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:08:25.752 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.752 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:25.752 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.752 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.752 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.752 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.752 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.752 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.752 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.752 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:25.752 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.752 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:25.752 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:25.752 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=1294129 00:08:25.752 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1294129 00:08:25.752 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1294129 ']' 00:08:25.752 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.752 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.752 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.752 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:25.752 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.752 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:25.752 [2024-07-26 10:56:45.063926] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:25.752 [2024-07-26 10:56:45.063971] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.753 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.753 [2024-07-26 10:56:45.120974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:25.753 [2024-07-26 10:56:45.201822] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.753 [2024-07-26 10:56:45.201858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.753 [2024-07-26 10:56:45.201865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.753 [2024-07-26 10:56:45.201871] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.753 [2024-07-26 10:56:45.201876] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
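Editor's note on the nvmf_tcp_init trace above: the two ice ports are placed in separate network namespaces so the NVMe/TCP traffic actually traverses the NICs rather than the loopback device. Condensed from the ip/iptables/ping commands already shown in the trace (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this host), the topology boils down to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root namespace -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # and back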
00:08:25.753 [2024-07-26 10:56:45.201909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.753 [2024-07-26 10:56:45.201995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.753 [2024-07-26 10:56:45.201997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.692 [2024-07-26 10:56:45.906543] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.692 Malloc0 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.692 Delay0 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.692 [2024-07-26 10:56:45.985030] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.692 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:26.692 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.692 [2024-07-26 10:56:46.135356] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:29.232 Initializing NVMe Controllers 00:08:29.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:29.232 controller IO queue size 128 less than required 00:08:29.232 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:29.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:29.232 Initialization complete. Launching workers. 
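Editor's note: condensed from the rpc_cmd calls traced just above (rpc_cmd wraps scripts/rpc.py; plain per-command invocations are shown here as an equivalent sketch, not the script's literal form), the target that the abort example is exercising was provisioned as:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # workload generator: submit I/O at queue depth 128 for 1 second and abort it in flight
    # (flags verbatim from the trace; the Delay0 bdev keeps I/O in flight long enough to abort)
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128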
00:08:29.232 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42437 00:08:29.232 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42498, failed to submit 62 00:08:29.232 success 42441, unsuccess 57, failed 0 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:29.232 rmmod nvme_tcp 00:08:29.232 rmmod nvme_fabrics 00:08:29.232 rmmod nvme_keyring 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1294129 ']' 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1294129 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1294129 ']' 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1294129 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1294129 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1294129' 00:08:29.232 killing process with pid 1294129 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1294129 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1294129 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.232 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.146 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:31.146 00:08:31.146 real 0m11.239s 00:08:31.146 user 0m13.344s 00:08:31.146 sys 0m5.129s 00:08:31.147 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.147 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.147 ************************************ 00:08:31.147 END TEST nvmf_abort 00:08:31.147 ************************************ 00:08:31.407 10:56:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:31.407 10:56:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:31.407 10:56:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.407 10:56:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.407 ************************************ 00:08:31.407 START TEST nvmf_ns_hotplug_stress 00:08:31.407 ************************************ 00:08:31.407 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:31.407 * Looking for test storage... 
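Editor's note: a quick cross-check of the abort summary printed above shows the counters are self-consistent (interpretation of the counters is the editor's, inferred from the labels in the output):

    total I/O issued     = completed + failed            = 123   + 42437 = 42560
    abort attempts       = submitted + failed to submit  = 42498 + 62    = 42560
    aborts submitted     = success + unsuccess           = 42441 + 57    = 42498

In other words, every I/O the example issued appears to have had a matching abort attempted, and only 62 aborts could not be queued at the driver.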
00:08:31.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.407 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.407 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:31.407 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:31.408 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
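Editor's note: the e810/x722/mlx arrays declared in the following lines are filled by matching each PCI function's vendor:device pair against known NIC IDs (0x8086:0x159b is the E810/ice part found on this host). nvmf/common.sh indexes a pre-built pci_bus_cache associative array for this; conceptually the cache amounts to something like the sketch below (illustrative only, not the script's literal code):

    declare -A pci_bus_cache
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")            # e.g. 0x8086
        device=$(<"$dev/device")            # e.g. 0x159b
        pci_bus_cache["$vendor:$device"]+=" ${dev##*/}"
    done
    # E810 (ice) functions on this box would then be listed under:
    echo "${pci_bus_cache["0x8086:0x159b"]}"   # -> 0000:86:00.0 0000:86:00.1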
00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:37.989 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.989 10:56:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:37.989 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:37.989 Found net devices under 0000:86:00.0: cvl_0_0 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:37.989 Found net devices under 0000:86:00.1: cvl_0_1 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.989 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:37.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:37.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:08:37.990 00:08:37.990 --- 10.0.0.2 ping statistics --- 00:08:37.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.990 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:37.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:08:37.990 00:08:37.990 --- 10.0.0.1 ping statistics --- 00:08:37.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.990 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1298147 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1298147 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1298147 ']' 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
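Editor's note: as in the abort test, nvmfappstart launches nvmf_tgt inside the namespace and blocks until its RPC socket answers. Stripped of the autotest plumbing, the pattern being traced is roughly the following (the polling loop is an assumption about what waitforlisten does, not its literal code; rpc_get_methods is used only as a cheap "is the target up" probe):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the target is ready
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done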
00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.990 10:56:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.990 [2024-07-26 10:56:56.604719] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:37.990 [2024-07-26 10:56:56.604762] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.990 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.990 [2024-07-26 10:56:56.664207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:37.990 [2024-07-26 10:56:56.740720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.990 [2024-07-26 10:56:56.740756] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.990 [2024-07-26 10:56:56.740763] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.990 [2024-07-26 10:56:56.740769] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.990 [2024-07-26 10:56:56.740774] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.990 [2024-07-26 10:56:56.740877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.990 [2024-07-26 10:56:56.740981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.990 [2024-07-26 10:56:56.740982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.990 10:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.990 10:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:08:37.990 10:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.990 10:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:37.990 10:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:37.990 10:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.990 10:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:37.990 10:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:38.283 [2024-07-26 10:56:57.614071] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.283 10:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:38.542 10:56:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.542 
[2024-07-26 10:56:57.996312] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.542 10:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.801 10:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:39.060 Malloc0 00:08:39.060 10:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:39.060 Delay0 00:08:39.320 10:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.320 10:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:39.580 NULL1 00:08:39.580 10:56:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:39.840 10:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1298644 00:08:39.840 10:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:39.840 10:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:39.840 10:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.840 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.840 Read completed with error (sct=0, sc=11) 00:08:39.840 10:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.100 10:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:40.100 10:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:40.359 true 00:08:40.359 10:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 1298644 00:08:40.359 10:56:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.296 10:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.296 10:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:41.296 10:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:41.556 true 00:08:41.556 10:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:41.556 10:57:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.815 10:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.815 10:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:41.815 10:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:42.074 true 00:08:42.074 10:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:42.074 10:57:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.456 10:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.456 10:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:43.456 10:57:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:43.715 true 00:08:43.715 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:43.715 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.655 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.655 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:44.655 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:44.915 true 00:08:44.915 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:44.915 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.175 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.175 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:45.175 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:45.434 true 00:08:45.434 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:45.434 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.816 10:57:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.816 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.817 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.817 10:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:46.817 10:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:46.817 true 00:08:47.076 10:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:47.076 10:57:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.016 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.016 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:48.016 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:48.016 true 00:08:48.016 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:48.016 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.276 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.536 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:48.536 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:48.536 true 00:08:48.796 10:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:48.796 10:57:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.996 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:49.996 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:49.996 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:50.256 true 00:08:50.256 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:50.256 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.195 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.195 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.195 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:51.195 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 
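Editor's note: the iterations above (null_size 1001 through 1011, and those that continue below) all follow the same pattern: while the spdk_nvme_perf job started earlier is still alive, namespace 1 is detached, Delay0 is re-attached, and NULL1 is grown by one block. Reconstructed from the traced ns_hotplug_stress.sh lines, a sketch of the loop shape (not the script verbatim):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ./build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!                                      # 1298644 in this run
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do        # keep hot-plugging while the workload runs
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"
    done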
00:08:51.455 true 00:08:51.455 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:51.455 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.715 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.715 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:51.715 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:51.976 true 00:08:51.976 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:51.976 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.359 10:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:53.359 10:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:53.359 10:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:53.691 true 00:08:53.691 10:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:53.691 10:57:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.262 10:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.522 10:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:54.522 10:57:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:54.782 true 00:08:54.782 10:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:54.782 10:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.042 10:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.042 10:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:55.042 10:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:55.302 true 00:08:55.302 10:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:55.302 10:57:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.684 10:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.684 10:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:56.684 10:57:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:56.684 true 00:08:56.684 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:56.684 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.623 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.883 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:57.883 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:57.883 true 00:08:57.883 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:57.883 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.143 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.403 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:58.403 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:58.403 true 00:08:58.403 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:08:58.403 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.785 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.785 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.785 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.785 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.785 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.785 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:59.785 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:59.785 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:00.045 true 00:09:00.045 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:09:00.045 10:57:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.985 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.985 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:00.985 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:01.246 true 00:09:01.246 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:09:01.246 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.506 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.506 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:01.506 10:57:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1021 00:09:01.766 true 00:09:01.766 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:09:01.766 10:57:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.148 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.148 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:03.148 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:03.408 true 00:09:03.408 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:09:03.408 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.350 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.350 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:04.350 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:04.610 true 00:09:04.610 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:09:04.610 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.610 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.870 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:04.870 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:05.131 true 00:09:05.131 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:09:05.131 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.070 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.070 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.331 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:06.331 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:06.591 true 00:09:06.591 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:09:06.591 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.531 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.531 10:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.531 10:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:07.531 10:57:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:07.791 true 00:09:07.791 10:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:09:07.791 10:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.052 10:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.052 10:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:08.052 10:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:08.312 true 00:09:08.312 10:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:09:08.312 10:57:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.736 
10:57:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.736 10:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:09.736 10:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:10.021 true 00:09:10.021 10:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:09:10.021 10:57:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.590 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.850 Initializing NVMe Controllers 00:09:10.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:10.850 Controller IO queue size 128, less than required. 00:09:10.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:10.850 Controller IO queue size 128, less than required. 00:09:10.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:10.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:10.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:10.850 Initialization complete. Launching workers. 
00:09:10.850 ======================================================== 00:09:10.850 Latency(us) 00:09:10.850 Device Information : IOPS MiB/s Average min max 00:09:10.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1944.16 0.95 47386.42 2283.46 1069482.63 00:09:10.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17647.15 8.62 7253.23 2366.83 307018.55 00:09:10.850 ======================================================== 00:09:10.850 Total : 19591.31 9.57 11235.88 2283.46 1069482.63 00:09:10.850 00:09:10.850 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:10.850 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:11.110 true 00:09:11.110 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1298644 00:09:11.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1298644) - No such process 00:09:11.110 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1298644 00:09:11.110 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.110 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:11.370 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:11.370 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:11.370 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:11.370 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.370 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:11.629 null0 00:09:11.629 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:11.629 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.629 10:57:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:11.629 null1 00:09:11.889 10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:11.889 10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.889 10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:11.889 null2 00:09:11.889 10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:11.889 
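The latency summary above is internally consistent: the Total IOPS and MiB/s values are plain sums (1944.16 + 17647.15 = 19591.31 IOPS; 0.95 + 8.62 = 9.57 MiB/s), and the Total average latency is the IOPS-weighted mean of the two namespace averages. A quick check (numbers copied from the table; awk is used here only for the floating-point arithmetic):

    # Consistency check of the Total row in the latency summary above.
    awk 'BEGIN {
        iops1 = 1944.16;  avg1 = 47386.42   # NSID 1 row: IOPS, average latency (us)
        iops2 = 17647.15; avg2 = 7253.23    # NSID 2 row
        tot = iops1 + iops2
        printf "Total IOPS %.2f, IOPS-weighted avg %.2f us\n", tot, (iops1*avg1 + iops2*avg2) / tot
    }'

which prints Total IOPS 19591.31 and an IOPS-weighted average of 11235.88 us, matching the Total row; the Total min/max columns are simply the extremes across the two namespaces.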
10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:11.889 10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:12.149 null3 00:09:12.149 10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:12.149 10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:12.149 10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:12.408 null4 00:09:12.408 10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:12.408 10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:12.408 10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:12.408 null5 00:09:12.408 10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:12.408 10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:12.408 10:57:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:12.666 null6 00:09:12.666 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:12.666 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:12.666 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:12.926 null7 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
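At this point the resize loop is over (the kill -0 at @44 reported that the I/O generator, PID 1298644, is gone), and the trace switches to the multi-threaded phase: nthreads=8, an empty pids array, and eight small null bdevs null0 through null7 created with the 100 / 4096 sizing arguments shown in the log, one per worker, after which the add_remove workers start being launched in the background (continued below). A rough sketch of that set-up loop, mirroring the (( i = 0 )) / (( i < nthreads )) / (( ++i )) steps in the trace (the rpc.py path and arguments are the ones in the log; the loop framing is an assumption):

    # Rough sketch of the set-up visible around ns_hotplug_stress.sh@58-@60 above, not the actual script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()                                      # will collect the worker PIDs for the later wait
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096  # null0 ... null7, same sizing arguments as the log
    done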
00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:12.926 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
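The interleaved nvmf_subsystem_add_ns -n <nsid> / nvmf_subsystem_remove_ns output that follows comes from those eight background workers, each bound to its own namespace ID and null bdev (add_remove 1 null0 through add_remove 8 null7) and each doing ten add/remove rounds (the @16 counter runs to 10). A sketch of the add_remove helper and the fan-out, reconstructed from the @14-@18 and @62-@66 trace lines (RPC names, argument order, the nsid/bdev pairing and the iteration count come from the log; variable handling and quoting are assumptions):

    # Sketch of the add_remove helper and worker fan-out seen in the trace, not the actual script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    nthreads=8
    pids=()

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # attach the bdev as namespace $nsid
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"           # and immediately detach it again
        done
    }

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &        # add_remove 1 null0, add_remove 2 null1, ...
        pids+=($!)
    done
    wait "${pids[@]}"                           # the "wait 1304762 1304765 ..." just below

The interleaved remove_ns/add_ns churn that fills the rest of this section is those eight workers running concurrently while the parent shell sits in that wait.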
00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1304762 1304765 1304766 1304768 1304771 1304776 1304778 1304780 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:12.927 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:13.186 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.186 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:13.186 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.186 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:13.186 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:13.186 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:13.186 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:13.186 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.186 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.186 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.187 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:13.563 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.563 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:13.563 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:13.563 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:13.563 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:13.563 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:13.563 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:13.563 10:57:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:13.563 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:13.823 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.823 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:13.823 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:13.823 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:13.823 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:13.823 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:13.823 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:13.823 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.082 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.083 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.083 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.083 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.083 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.083 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.342 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.602 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.602 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.602 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:14.602 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.602 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:14.602 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:14.602 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.602 10:57:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
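The interleaved @16/@17/@18 entries above and below are consistent with several concurrent add/remove workers, one per namespace ID, each looping ten times against nqn.2016-06.io.spdk:cnode1. The sketch below is a hypothetical reconstruction built only from those trace markers, not the actual ns_hotplug_stress.sh: the rpc.py arguments are copied from the log, while the helper name, the backgrounding of one worker per namespace, and the final wait are assumptions.

```bash
#!/usr/bin/env bash
# Hypothetical reconstruction of the hotplug churn traced above; not the real ns_hotplug_stress.sh.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() { # assumed helper: one worker per namespace ID
	local nsid=$1 bdev=$2
	for ((i = 0; i < 10; ++i)); do                              # "(( ++i ))" / "(( i < 10 ))" seen at @16
		"$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17 in the trace
		"$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18 in the trace
	done
}

# Namespace ID N is consistently backed by bdev null(N-1) in the trace (e.g. -n 6 ... null5).
for n in {1..8}; do
	add_remove "$n" "null$((n - 1))" &
done
wait    # the interleaving of the eight workers produces the shuffled-looking add/remove order in the log
```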
00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.862 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.863 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:14.863 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:14.863 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:14.863 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:14.863 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:14.863 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:14.863 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:14.863 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.122 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.381 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.381 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.381 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.381 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.381 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.381 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.381 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.381 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.641 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.642 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.642 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.642 10:57:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.642 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.642 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.642 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.642 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.642 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.642 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.642 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.642 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.902 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.162 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.162 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.162 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.162 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.162 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.162 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.162 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.162 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.422 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.423 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.423 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.423 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.423 10:57:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:16.683 rmmod nvme_tcp 00:09:16.683 rmmod nvme_fabrics 00:09:16.683 rmmod nvme_keyring 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1298147 ']' 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1298147 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1298147 ']' 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1298147 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.683 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1298147 00:09:16.944 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:16.944 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:16.944 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1298147' 00:09:16.944 killing process with pid 1298147 00:09:16.944 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1298147 00:09:16.944 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1298147 00:09:16.944 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:16.944 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:16.944 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:16.944 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:16.944 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:16.944 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.944 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.944 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:19.485 00:09:19.485 real 0m47.743s 00:09:19.485 user 3m12.118s 00:09:19.485 sys 0m15.337s 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:19.485 ************************************ 00:09:19.485 END TEST nvmf_ns_hotplug_stress 00:09:19.485 ************************************ 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.485 ************************************ 00:09:19.485 START TEST nvmf_delete_subsystem 00:09:19.485 ************************************ 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:19.485 * Looking for test storage... 
00:09:19.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.485 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:19.486 10:57:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.774 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:24.775 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:24.775 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:24.775 Found net devices under 0000:86:00.0: cvl_0_0 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:24.775 Found net devices under 0000:86:00.1: cvl_0_1 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:24.775 10:57:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.775 10:57:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.775 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:24.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:09:24.775 00:09:24.775 --- 10.0.0.2 ping statistics --- 00:09:24.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.775 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:09:24.775 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:09:24.775 00:09:24.775 --- 10.0.0.1 ping statistics --- 00:09:24.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.775 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:09:24.775 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.775 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:24.775 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.775 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.775 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.775 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.775 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.775 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.775 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.775 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:24.775 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.775 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:24.776 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:24.776 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1309136 00:09:24.776 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1309136 00:09:24.776 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:24.776 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1309136 ']' 00:09:24.776 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.776 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.776 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.776 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.776 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:24.776 [2024-07-26 10:57:44.133871] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
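For reference: the nvmf_tcp_init sequence traced above boils down to the iproute2/iptables commands below. This is a condensed sketch taken from the xtrace lines, not the nvmf/common.sh source verbatim; the interface names are the ones detected on this host.

  # target port cvl_0_0 is moved into its own namespace; initiator port cvl_0_1 stays in the root namespace
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # NVMF_INITIATOR_IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # NVMF_FIRST_TARGET_IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # accept NVMe/TCP (port 4420) on the initiator port
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

Both pings answer in well under a millisecond, so the two ports can reach each other before the target application is started below.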
00:09:24.776 [2024-07-26 10:57:44.133915] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.776 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.776 [2024-07-26 10:57:44.192823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:24.776 [2024-07-26 10:57:44.265303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.776 [2024-07-26 10:57:44.265345] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.776 [2024-07-26 10:57:44.265353] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.776 [2024-07-26 10:57:44.265359] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.776 [2024-07-26 10:57:44.265364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.776 [2024-07-26 10:57:44.265409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.776 [2024-07-26 10:57:44.265411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.715 [2024-07-26 10:57:44.981778] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.715 10:57:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.715 [2024-07-26 10:57:44.997959] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.715 NULL1 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.715 Delay0 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1309255 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:25.715 10:57:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:25.715 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.715 [2024-07-26 10:57:45.072658] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
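For reference, with the xtrace prefixes stripped, the target bring-up just traced and the workload started against it look roughly like this. rpc_cmd is the test harness's wrapper around scripts/rpc.py, the comments are editorial, and this is a sketch of the trace rather than the delete_subsystem.sh source verbatim.

  # target: nvmf_tgt runs inside the target namespace on cores 0-1 (-m 0x3)
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512                # 1000 MB null bdev, 512-byte blocks
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
          -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s of artificial latency so I/O stays queued
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # initiator: 5 s of 512-byte random 70/30 read/write at queue depth 128 on cores 2-3 (-c 0xC)
  build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

Because Delay0 holds every I/O for roughly a second, commands are still in flight when nvmf_delete_subsystem is issued just below. The bursts of 'completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' that follow are those queued commands being aborted: NVMe generic status 0x08 is Command Aborted due to SQ Deletion, and -6 is -ENXIO from submissions to a queue pair that is no longer usable.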
00:09:27.625 10:57:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.625 10:57:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.625 10:57:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Write completed with error (sct=0, sc=8) 00:09:27.885 Write completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 starting I/O failed: -6 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Write completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 starting I/O failed: -6 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Write completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 starting I/O failed: -6 00:09:27.885 Write completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Write completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 starting I/O failed: -6 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 starting I/O failed: -6 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Write completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 starting I/O failed: -6 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Write completed with error (sct=0, sc=8) 00:09:27.885 Write completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 starting I/O failed: -6 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Write completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 starting I/O failed: -6 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.885 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 [2024-07-26 10:57:47.171614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f37b4000c00 is same with the state(5) to be set 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with 
error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, 
sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Write completed with 
error (sct=0, sc=8) 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:27.886 Write completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 Read completed with error (sct=0, sc=8) 00:09:27.886 starting I/O failed: -6 00:09:28.866 [2024-07-26 10:57:48.131089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1494ac0 is same with the state(5) to be set 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Write completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Write completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Write completed with error (sct=0, sc=8) 00:09:28.866 Write completed with error (sct=0, sc=8) 00:09:28.866 Write completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Write completed with error (sct=0, sc=8) 00:09:28.866 Write completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Write completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Write completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Write completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 [2024-07-26 10:57:48.173226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14933e0 is same with the state(5) to be set 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.866 Write completed with error (sct=0, sc=8) 00:09:28.866 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 [2024-07-26 10:57:48.173528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f37b400d330 is same with the state(5) to be set 00:09:28.867 Read 
completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 [2024-07-26 10:57:48.173825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1493a40 is same with the state(5) to be set 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error 
(sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 Write completed with error (sct=0, sc=8) 00:09:28.867 Read completed with error (sct=0, sc=8) 00:09:28.867 [2024-07-26 10:57:48.174032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1493000 is same with the state(5) to be set 00:09:28.867 Initializing NVMe Controllers 00:09:28.867 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:28.867 Controller IO queue size 128, less than required. 00:09:28.867 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:28.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:28.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:28.867 Initialization complete. Launching workers. 00:09:28.867 ======================================================== 00:09:28.867 Latency(us) 00:09:28.867 Device Information : IOPS MiB/s Average min max 00:09:28.867 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 190.62 0.09 949796.15 576.99 1011710.52 00:09:28.867 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.42 0.07 899574.96 275.78 1011807.20 00:09:28.867 ======================================================== 00:09:28.867 Total : 340.05 0.17 927728.15 275.78 1011807.20 00:09:28.867 00:09:28.867 [2024-07-26 10:57:48.174579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1494ac0 (9): Bad file descriptor 00:09:28.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:28.867 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.867 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:28.867 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1309255 00:09:28.867 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1309255 00:09:29.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1309255) - No such process 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1309255 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1309255 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@642 -- # type -t wait 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1309255 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.551 [2024-07-26 10:57:48.707220] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.551 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.552 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.552 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.552 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.552 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.552 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1309861 00:09:29.552 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:29.552 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:29.552 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1309861 00:09:29.552 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:29.552 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.552 [2024-07-26 10:57:48.764315] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on 
TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:29.811 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:29.811 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1309861 00:09:29.811 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:30.381 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:30.381 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1309861 00:09:30.381 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:30.953 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:30.953 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1309861 00:09:30.953 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.522 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:31.522 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1309861 00:09:31.522 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.780 10:57:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:31.780 10:57:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1309861 00:09:31.780 10:57:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:32.350 10:57:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:32.350 10:57:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1309861 00:09:32.350 10:57:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:32.609 Initializing NVMe Controllers 00:09:32.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:32.609 Controller IO queue size 128, less than required. 00:09:32.609 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:32.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:32.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:32.609 Initialization complete. Launching workers. 
00:09:32.609 ======================================================== 00:09:32.609 Latency(us) 00:09:32.609 Device Information : IOPS MiB/s Average min max 00:09:32.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004154.66 1000469.28 1011192.31 00:09:32.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005111.81 1000492.43 1013525.13 00:09:32.609 ======================================================== 00:09:32.609 Total : 256.00 0.12 1004633.24 1000469.28 1013525.13 00:09:32.609 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1309861 00:09:32.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1309861) - No such process 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1309861 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.870 rmmod nvme_tcp 00:09:32.870 rmmod nvme_fabrics 00:09:32.870 rmmod nvme_keyring 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1309136 ']' 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1309136 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1309136 ']' 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1309136 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1309136 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1309136' 00:09:32.870 killing process with pid 1309136 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1309136 00:09:32.870 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1309136 00:09:33.129 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.129 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.129 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.129 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.129 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.129 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.129 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.129 10:57:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:35.670 00:09:35.670 real 0m16.083s 00:09:35.670 user 0m30.117s 00:09:35.670 sys 0m4.983s 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:35.670 ************************************ 00:09:35.670 END TEST nvmf_delete_subsystem 00:09:35.670 ************************************ 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.670 ************************************ 00:09:35.670 START TEST nvmf_host_management 00:09:35.670 ************************************ 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:35.670 * Looking for test storage... 
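For reference, the nvmftestfini teardown traced above (run before this host_management test does its own init) comes down to a few commands. This is a sketch reconstructed from the xtrace; remove_spdk_ns is assumed, from its name and the namespace created earlier, to delete cvl_0_0_ns_spdk.

  sync
  modprobe -v -r nvme-tcp              # the log shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # 1309136, the nvmf_tgt started for delete_subsystem
  ip netns del cvl_0_0_ns_spdk         # via remove_spdk_ns (assumption)
  ip -4 addr flush cvl_0_1

The whole delete_subsystem run, including this cleanup, accounts for the 'real 0m16.083s' reported above.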
00:09:35.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:09:35.670 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:09:39.867 
10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:39.867 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:39.867 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:39.867 Found net devices under 0000:86:00.0: cvl_0_0 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:39.867 Found net devices under 0000:86:00.1: cvl_0_1 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:39.867 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.872 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.872 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:39.873 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.873 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.873 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:39.873 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:39.873 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.873 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:40.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:40.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:09:40.134 00:09:40.134 --- 10.0.0.2 ping statistics --- 00:09:40.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.134 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:09:40.134 00:09:40.134 --- 10.0.0.1 ping statistics --- 00:09:40.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.134 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:40.134 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.135 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1313848 00:09:40.135 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1313848 00:09:40.135 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:40.135 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1313848 ']' 00:09:40.135 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.135 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.135 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.135 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.135 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.135 [2024-07-26 10:57:59.611571] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:40.135 [2024-07-26 10:57:59.611614] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.395 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.395 [2024-07-26 10:57:59.670517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.395 [2024-07-26 10:57:59.751365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.395 [2024-07-26 10:57:59.751402] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.395 [2024-07-26 10:57:59.751410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.395 [2024-07-26 10:57:59.751416] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.396 [2024-07-26 10:57:59.751420] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.396 [2024-07-26 10:57:59.751534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.396 [2024-07-26 10:57:59.751628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.396 [2024-07-26 10:57:59.751736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.396 [2024-07-26 10:57:59.751737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:40.965 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.965 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:40.965 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:40.965 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:40.965 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.224 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.224 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.224 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.224 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.224 [2024-07-26 10:58:00.488458] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.224 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.225 Malloc0 00:09:41.225 [2024-07-26 10:58:00.548118] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1314116 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1314116 /var/tmp/bdevperf.sock 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1314116 ']' 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:41.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
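For reference, the TCP test-interface plumbing that nvmf_tcp_init performed earlier in this log (around 00:09:39-00:09:40) boils down to the shell sketch below. The interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, the cvl_0_0_ns_spdk namespace and port 4420 are all taken from the log above; running these steps standalone outside the test harness is an assumption.
    # Consolidated from the nvmf_tcp_init steps logged above (sketch, not the harness itself)
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1       # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                               # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # connectivity sanity check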
00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:41.225 { 00:09:41.225 "params": { 00:09:41.225 "name": "Nvme$subsystem", 00:09:41.225 "trtype": "$TEST_TRANSPORT", 00:09:41.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.225 "adrfam": "ipv4", 00:09:41.225 "trsvcid": "$NVMF_PORT", 00:09:41.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.225 "hdgst": ${hdgst:-false}, 00:09:41.225 "ddgst": ${ddgst:-false} 00:09:41.225 }, 00:09:41.225 "method": "bdev_nvme_attach_controller" 00:09:41.225 } 00:09:41.225 EOF 00:09:41.225 )") 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:41.225 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:41.225 "params": { 00:09:41.225 "name": "Nvme0", 00:09:41.225 "trtype": "tcp", 00:09:41.225 "traddr": "10.0.0.2", 00:09:41.225 "adrfam": "ipv4", 00:09:41.225 "trsvcid": "4420", 00:09:41.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:41.225 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:41.225 "hdgst": false, 00:09:41.225 "ddgst": false 00:09:41.225 }, 00:09:41.225 "method": "bdev_nvme_attach_controller" 00:09:41.225 }' 00:09:41.225 [2024-07-26 10:58:00.638718] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:41.225 [2024-07-26 10:58:00.638766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1314116 ] 00:09:41.225 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.225 [2024-07-26 10:58:00.694725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.484 [2024-07-26 10:58:00.770054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.744 Running I/O for 10 seconds... 
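The --json /dev/fd/63 argument in the bdevperf command above points at a config generated on the fly by gen_nvmf_target_json; with the parameters printed just above filled in, the file bdevperf consumes looks roughly like the sketch below. The outer "subsystems"/"bdev" wrapper is not shown in the log and is assumed here, and the /tmp path and relative build path are hypothetical; the attach parameters and the -q/-o/-w/-t flags are copied from the run above.
    # Sketch of the generated bdevperf config, assuming the usual SPDK "subsystems" wrapper
    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same workload as the logged run: queue depth 64, 64 KiB I/O, verify pattern, 10 seconds
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10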
00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.004 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.266 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:09:42.266 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:09:42.266 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:42.266 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:42.266 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:42.266 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:42.266 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.266 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.266 [2024-07-26 
10:58:01.527427] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e580 is same with the state(5) to be set 00:09:42.266 [2024-07-26 10:58:01.527470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e580 is same with the state(5) to be set 00:09:42.266 [2024-07-26 10:58:01.527478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e580 is same with the state(5) to be set 00:09:42.266 [2024-07-26 10:58:01.527484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e580 is same with the state(5) to be set 00:09:42.266 [2024-07-26 10:58:01.527490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e580 is same with the state(5) to be set 00:09:42.266 [2024-07-26 10:58:01.527497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e580 is same with the state(5) to be set 00:09:42.266 [2024-07-26 10:58:01.527503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e580 is same with the state(5) to be set 00:09:42.266 [2024-07-26 10:58:01.528509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:42.266 [2024-07-26 10:58:01.528542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.528556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:42.266 [2024-07-26 10:58:01.528563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.528571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:42.266 [2024-07-26 10:58:01.528578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.528585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:42.266 [2024-07-26 10:58:01.528591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.528598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x115c980 is same with the state(5) to be set 00:09:42.266 [2024-07-26 10:58:01.529295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.266 [2024-07-26 10:58:01.529314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.529326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.266 [2024-07-26 10:58:01.529334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.529342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:09:42.266 [2024-07-26 10:58:01.529349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.529357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.266 [2024-07-26 10:58:01.529364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.529372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.266 [2024-07-26 10:58:01.529378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.529387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.266 [2024-07-26 10:58:01.529394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.529402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.266 [2024-07-26 10:58:01.529408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.529416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.266 [2024-07-26 10:58:01.529422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.529430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.266 [2024-07-26 10:58:01.529437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.529451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.266 [2024-07-26 10:58:01.529458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.529466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.266 [2024-07-26 10:58:01.529473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.529481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.266 [2024-07-26 10:58:01.529487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.266 [2024-07-26 10:58:01.529495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:42.266 [2024-07-26 10:58:01.529502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 
[2024-07-26 10:58:01.529651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 
10:58:01.529794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 
10:58:01.529940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.529991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.529998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.530006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.530012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.530020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.530027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.530035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.530047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.530056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.267 [2024-07-26 10:58:01.530062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.267 [2024-07-26 10:58:01.530070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.268 [2024-07-26 10:58:01.530076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.268 [2024-07-26 10:58:01.530087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.268 [2024-07-26 
10:58:01.530094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.268 [2024-07-26 10:58:01.530102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.268 [2024-07-26 10:58:01.530108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.268 [2024-07-26 10:58:01.530116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.268 [2024-07-26 10:58:01.530123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.268 [2024-07-26 10:58:01.530131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.268 [2024-07-26 10:58:01.530137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.268 [2024-07-26 10:58:01.530145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.268 [2024-07-26 10:58:01.530151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.268 [2024-07-26 10:58:01.530159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.268 [2024-07-26 10:58:01.530166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.268 [2024-07-26 10:58:01.530174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.268 [2024-07-26 10:58:01.530180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.268 [2024-07-26 10:58:01.530189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.268 [2024-07-26 10:58:01.530196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.268 [2024-07-26 10:58:01.530204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.268 [2024-07-26 10:58:01.530211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.268 [2024-07-26 10:58:01.530218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.268 [2024-07-26 10:58:01.530225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.268 [2024-07-26 10:58:01.530233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.268 [2024-07-26 
10:58:01.530239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.268 [2024-07-26 10:58:01.530247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.268 [2024-07-26 10:58:01.530253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.268 [2024-07-26 10:58:01.530311] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x158e660 was disconnected and freed. reset controller. 00:09:42.268 [2024-07-26 10:58:01.531207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:42.268 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.268 task offset: 49152 on job bdev=Nvme0n1 fails 00:09:42.268 00:09:42.268 Latency(us) 00:09:42.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.268 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:42.268 Job: Nvme0n1 ended in about 0.43 seconds with error 00:09:42.268 Verification LBA range: start 0x0 length 0x400 00:09:42.268 Nvme0n1 : 0.43 903.40 56.46 150.57 0.00 59363.11 1296.47 62914.56 00:09:42.268 =================================================================================================================== 00:09:42.268 Total : 903.40 56.46 150.57 0.00 59363.11 1296.47 62914.56 00:09:42.268 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:42.268 [2024-07-26 10:58:01.532793] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:42.268 [2024-07-26 10:58:01.532807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x115c980 (9): Bad file descriptor 00:09:42.268 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.268 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:42.268 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.268 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:42.268 [2024-07-26 10:58:01.588009] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
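The exchange above is the fault-injection step of the host-management test: with verify I/O in flight, the host NQN was removed from the subsystem, the target tore down the queue pair (hence the ABORTED - SQ DELETION completions and the "task offset ... fails" summary), bdevperf began resetting the controller, and the reset only completed once the host was added back. Outside the harness's rpc_cmd wrapper, the same two RPCs would look roughly like the sketch below; the scripts/rpc.py path and the default /var/tmp/spdk.sock RPC socket are assumptions, while the RPC names and NQNs are taken from the log.
    # Trigger: revoke the host's access to the subsystem while I/O is running
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Recover: grant access again so the initiator's controller reset can succeed
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0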
00:09:43.208 10:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1314116 00:09:43.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1314116) - No such process 00:09:43.208 10:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:43.208 10:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:43.208 10:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:43.208 10:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:43.208 10:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:43.208 10:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:43.208 10:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:43.208 10:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:43.208 { 00:09:43.208 "params": { 00:09:43.208 "name": "Nvme$subsystem", 00:09:43.208 "trtype": "$TEST_TRANSPORT", 00:09:43.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.208 "adrfam": "ipv4", 00:09:43.208 "trsvcid": "$NVMF_PORT", 00:09:43.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.208 "hdgst": ${hdgst:-false}, 00:09:43.208 "ddgst": ${ddgst:-false} 00:09:43.208 }, 00:09:43.208 "method": "bdev_nvme_attach_controller" 00:09:43.208 } 00:09:43.208 EOF 00:09:43.208 )") 00:09:43.208 10:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:43.208 10:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:43.208 10:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:43.208 10:58:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:43.208 "params": { 00:09:43.208 "name": "Nvme0", 00:09:43.208 "trtype": "tcp", 00:09:43.208 "traddr": "10.0.0.2", 00:09:43.208 "adrfam": "ipv4", 00:09:43.208 "trsvcid": "4420", 00:09:43.208 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:43.208 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:43.208 "hdgst": false, 00:09:43.208 "ddgst": false 00:09:43.208 }, 00:09:43.208 "method": "bdev_nvme_attach_controller" 00:09:43.208 }' 00:09:43.208 [2024-07-26 10:58:02.593232] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:43.208 [2024-07-26 10:58:02.593281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1314370 ] 00:09:43.208 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.208 [2024-07-26 10:58:02.647724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.468 [2024-07-26 10:58:02.720704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.468 Running I/O for 1 seconds... 00:09:44.844 00:09:44.844 Latency(us) 00:09:44.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:44.844 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:44.844 Verification LBA range: start 0x0 length 0x400 00:09:44.844 Nvme0n1 : 1.01 889.38 55.59 0.00 0.00 71088.43 17552.25 65649.98 00:09:44.844 =================================================================================================================== 00:09:44.844 Total : 889.38 55.59 0.00 0.00 71088.43 17552.25 65649.98 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.844 rmmod nvme_tcp 00:09:44.844 rmmod nvme_fabrics 00:09:44.844 rmmod nvme_keyring 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1313848 ']' 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1313848 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1313848 ']' 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1313848 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@955 -- # uname 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1313848 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1313848' 00:09:44.844 killing process with pid 1313848 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1313848 00:09:44.844 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1313848 00:09:45.103 [2024-07-26 10:58:04.413665] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:45.103 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:45.103 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:45.103 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:45.103 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:45.103 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:45.103 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.103 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.103 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.049 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.049 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:47.049 00:09:47.049 real 0m11.834s 00:09:47.049 user 0m22.386s 00:09:47.049 sys 0m4.625s 00:09:47.049 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.049 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:47.049 ************************************ 00:09:47.049 END TEST nvmf_host_management 00:09:47.049 ************************************ 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.314 ************************************ 00:09:47.314 START TEST nvmf_lvol 00:09:47.314 ************************************ 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:47.314 * Looking for test storage... 00:09:47.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.314 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
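A note on the NVME_HOSTNQN/NVME_HOSTID pair generated above: together with NVME_CONNECT='nvme connect' and the NVME_HOST array, these are the arguments the kernel-initiator tests hand to nvme-cli. This lvol run drives I/O with spdk_nvme_perf instead, so the following is only an illustrative sketch of how those values would be consumed, using the subsystem NQN and the 10.0.0.2:4420 listener created later in this log:

# illustrative only -- not executed by this run
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid=80aaeb9f-0274-ea11-906e-0017a4403562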
00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:09:47.315 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:52.599 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:52.600 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:52.600 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:52.600 Found net devices under 0000:86:00.0: cvl_0_0 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:52.600 Found net devices under 0000:86:00.1: cvl_0_1 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.600 10:58:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.600 10:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.600 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.600 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.600 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:52.600 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.860 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.860 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.860 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:52.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:09:52.860 00:09:52.860 --- 10.0.0.2 ping statistics --- 00:09:52.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.860 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:09:52.860 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.397 ms 00:09:52.860 00:09:52.860 --- 10.0.0.1 ping statistics --- 00:09:52.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.860 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1318138 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1318138 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1318138 ']' 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.861 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:52.861 [2024-07-26 10:58:12.266187] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:52.861 [2024-07-26 10:58:12.266231] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.861 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.861 [2024-07-26 10:58:12.324716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:53.120 [2024-07-26 10:58:12.405558] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.120 [2024-07-26 10:58:12.405595] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.120 [2024-07-26 10:58:12.405602] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.120 [2024-07-26 10:58:12.405609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.120 [2024-07-26 10:58:12.405614] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.120 [2024-07-26 10:58:12.405656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.120 [2024-07-26 10:58:12.405751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.120 [2024-07-26 10:58:12.405753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.690 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.690 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:53.690 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.690 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:53.690 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:53.690 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.690 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:53.951 [2024-07-26 10:58:13.270296] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.951 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.210 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:54.210 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.210 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:54.210 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:54.468 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:54.728 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5759d898-d972-43ca-8465-67bb17abe2fa 
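The trace above has just assembled the lvol test's backing store: two 64 MiB malloc bdevs, striped into a raid0 bdev, with an lvstore named "lvs" on top. A minimal sketch of the same RPC sequence, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock (bdev names and UUID in the comments are the ones this run produced; the 20 MiB lvol creation is the very next step in the log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
m0=$($rpc bdev_malloc_create 64 512)                 # -> Malloc0
m1=$($rpc bdev_malloc_create 64 512)                 # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # -> 5759d898-d972-43ca-8465-67bb17abe2fa
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB lvol, later exported as cnode0's namespace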
00:09:54.728 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5759d898-d972-43ca-8465-67bb17abe2fa lvol 20 00:09:54.987 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1d5964c8-8d8a-45ab-ab7c-601492dfa57e 00:09:54.987 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:54.987 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1d5964c8-8d8a-45ab-ab7c-601492dfa57e 00:09:55.247 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:55.506 [2024-07-26 10:58:14.756059] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.506 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:55.506 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1318628 00:09:55.506 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:55.506 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:55.506 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.892 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1d5964c8-8d8a-45ab-ab7c-601492dfa57e MY_SNAPSHOT 00:09:56.892 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a016d3c1-f38d-41ff-b9db-bdfd5de8df7e 00:09:56.892 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1d5964c8-8d8a-45ab-ab7c-601492dfa57e 30 00:09:57.152 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a016d3c1-f38d-41ff-b9db-bdfd5de8df7e MY_CLONE 00:09:57.152 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1fa77df3-9ae6-4da2-9159-4acf3081c0c1 00:09:57.152 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1fa77df3-9ae6-4da2-9159-4acf3081c0c1 00:09:57.721 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1318628 00:10:05.850 Initializing NVMe Controllers 00:10:05.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:05.850 Controller IO queue size 128, less than required. 00:10:05.850 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:05.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:05.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:05.850 Initialization complete. Launching workers. 00:10:05.850 ======================================================== 00:10:05.850 Latency(us) 00:10:05.850 Device Information : IOPS MiB/s Average min max 00:10:05.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11986.80 46.82 10684.94 1892.69 63869.34 00:10:05.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11872.90 46.38 10787.14 3568.22 70431.25 00:10:05.850 ======================================================== 00:10:05.850 Total : 23859.70 93.20 10735.80 1892.69 70431.25 00:10:05.850 00:10:05.850 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:06.109 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1d5964c8-8d8a-45ab-ab7c-601492dfa57e 00:10:06.370 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5759d898-d972-43ca-8465-67bb17abe2fa 00:10:06.370 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:06.631 rmmod nvme_tcp 00:10:06.631 rmmod nvme_fabrics 00:10:06.631 rmmod nvme_keyring 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1318138 ']' 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1318138 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1318138 ']' 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1318138 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1318138 00:10:06.631 10:58:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1318138' 00:10:06.631 killing process with pid 1318138 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1318138 00:10:06.631 10:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1318138 00:10:06.891 10:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:06.891 10:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:06.891 10:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:06.891 10:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.891 10:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:06.891 10:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.891 10:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.891 10:58:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.800 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:08.800 00:10:08.800 real 0m21.687s 00:10:08.800 user 1m3.928s 00:10:08.800 sys 0m6.875s 00:10:08.800 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.800 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:08.800 ************************************ 00:10:08.800 END TEST nvmf_lvol 00:10:08.800 ************************************ 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.061 ************************************ 00:10:09.061 START TEST nvmf_lvs_grow 00:10:09.061 ************************************ 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:09.061 * Looking for test storage... 
00:10:09.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.061 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.062 10:58:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:09.062 10:58:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:10:09.062 10:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:14.383 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:14.383 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:14.383 
10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:14.383 Found net devices under 0000:86:00.0: cvl_0_0 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:14.383 Found net devices under 0000:86:00.1: cvl_0_1 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.383 10:58:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.383 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.384 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.384 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:14.384 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.644 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.644 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.644 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:14.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:10:14.644 00:10:14.644 --- 10.0.0.2 ping statistics --- 00:10:14.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.644 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:10:14.644 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:14.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:10:14.644 00:10:14.644 --- 10.0.0.1 ping statistics --- 00:10:14.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.644 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:10:14.644 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.644 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:10:14.644 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:14.644 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.644 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:14.644 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:14.644 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.644 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:14.644 10:58:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:14.644 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:14.644 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:14.644 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:14.644 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:14.644 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1323996 00:10:14.644 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1323996 00:10:14.644 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:14.644 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1323996 ']' 00:10:14.644 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.644 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.644 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.644 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.644 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:14.644 [2024-07-26 10:58:34.062635] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
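For reference, the per-run TCP plumbing that nvmf_tcp_init performed just above reduces to the sketch below. cvl_0_0/cvl_0_1 are this rig's E810 ports, and 10.0.0.1/10.0.0.2 and the namespace name are the harness defaults, so treat the concrete names as assumptions that will differ on other hosts:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> root ns
  modprobe nvme-tcp

The target application itself is then started through the same wrapper, which is why nvmf_tgt appears in this log as 'ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x1'.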
00:10:14.644 [2024-07-26 10:58:34.062677] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.644 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.644 [2024-07-26 10:58:34.122417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.904 [2024-07-26 10:58:34.196947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.904 [2024-07-26 10:58:34.196988] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.904 [2024-07-26 10:58:34.196995] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.904 [2024-07-26 10:58:34.197001] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.904 [2024-07-26 10:58:34.197006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:14.904 [2024-07-26 10:58:34.197028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.472 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:15.472 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:10:15.472 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:15.472 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:15.472 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:15.473 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.473 10:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:15.732 [2024-07-26 10:58:35.064966] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.732 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:15.732 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:15.732 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.732 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:15.732 ************************************ 00:10:15.732 START TEST lvs_grow_clean 00:10:15.732 ************************************ 00:10:15.732 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:10:15.732 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:15.732 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:15.732 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:15.732 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:10:15.732 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:15.732 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:15.732 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:15.732 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:15.732 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:15.991 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:15.991 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:16.251 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c4b7b9e6-9552-46ab-aab9-697ef6ec1b60 00:10:16.251 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4b7b9e6-9552-46ab-aab9-697ef6ec1b60 00:10:16.251 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:16.251 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:16.251 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:16.251 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c4b7b9e6-9552-46ab-aab9-697ef6ec1b60 lvol 150 00:10:16.511 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e4729604-0de2-4354-bc68-c2b958119637 00:10:16.511 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:16.511 10:58:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:16.511 [2024-07-26 10:58:36.006838] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:16.511 [2024-07-26 10:58:36.006889] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:16.772 true 00:10:16.772 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4b7b9e6-9552-46ab-aab9-697ef6ec1b60 00:10:16.772 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:16.772 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:16.772 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:17.032 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e4729604-0de2-4354-bc68-c2b958119637 00:10:17.291 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:17.291 [2024-07-26 10:58:36.692906] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.291 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:17.551 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:17.551 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1324502 00:10:17.551 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:17.551 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1324502 /var/tmp/bdevperf.sock 00:10:17.551 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1324502 ']' 00:10:17.551 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:17.551 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:17.551 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:17.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:17.551 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:17.551 10:58:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:17.551 [2024-07-26 10:58:36.907501] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
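The initiator half of each run is bdevperf driven over its own RPC socket. Stripped of the harness wrappers (paths shortened to the spdk tree, option values exactly as used here), the sequence amounts to roughly:

  # start bdevperf idle (-z) on core 1: 4 KiB random writes, queue depth 128, 10 s, per-second stats
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

  # attach the target's lvol namespace over NVMe/TCP; it shows up as bdev Nvme0n1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

  # kick off the queued job; the 'Running I/O for 10 seconds' table below is its output
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests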
00:10:17.551 [2024-07-26 10:58:36.907546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1324502 ] 00:10:17.551 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.551 [2024-07-26 10:58:36.959898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.551 [2024-07-26 10:58:37.038606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.491 10:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:18.491 10:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:10:18.491 10:58:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:18.750 Nvme0n1 00:10:18.750 10:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:19.010 [ 00:10:19.010 { 00:10:19.010 "name": "Nvme0n1", 00:10:19.010 "aliases": [ 00:10:19.010 "e4729604-0de2-4354-bc68-c2b958119637" 00:10:19.010 ], 00:10:19.010 "product_name": "NVMe disk", 00:10:19.010 "block_size": 4096, 00:10:19.010 "num_blocks": 38912, 00:10:19.010 "uuid": "e4729604-0de2-4354-bc68-c2b958119637", 00:10:19.010 "assigned_rate_limits": { 00:10:19.010 "rw_ios_per_sec": 0, 00:10:19.010 "rw_mbytes_per_sec": 0, 00:10:19.010 "r_mbytes_per_sec": 0, 00:10:19.010 "w_mbytes_per_sec": 0 00:10:19.010 }, 00:10:19.010 "claimed": false, 00:10:19.010 "zoned": false, 00:10:19.010 "supported_io_types": { 00:10:19.010 "read": true, 00:10:19.010 "write": true, 00:10:19.010 "unmap": true, 00:10:19.010 "flush": true, 00:10:19.010 "reset": true, 00:10:19.010 "nvme_admin": true, 00:10:19.010 "nvme_io": true, 00:10:19.010 "nvme_io_md": false, 00:10:19.010 "write_zeroes": true, 00:10:19.010 "zcopy": false, 00:10:19.010 "get_zone_info": false, 00:10:19.010 "zone_management": false, 00:10:19.010 "zone_append": false, 00:10:19.010 "compare": true, 00:10:19.010 "compare_and_write": true, 00:10:19.010 "abort": true, 00:10:19.010 "seek_hole": false, 00:10:19.010 "seek_data": false, 00:10:19.010 "copy": true, 00:10:19.011 "nvme_iov_md": false 00:10:19.011 }, 00:10:19.011 "memory_domains": [ 00:10:19.011 { 00:10:19.011 "dma_device_id": "system", 00:10:19.011 "dma_device_type": 1 00:10:19.011 } 00:10:19.011 ], 00:10:19.011 "driver_specific": { 00:10:19.011 "nvme": [ 00:10:19.011 { 00:10:19.011 "trid": { 00:10:19.011 "trtype": "TCP", 00:10:19.011 "adrfam": "IPv4", 00:10:19.011 "traddr": "10.0.0.2", 00:10:19.011 "trsvcid": "4420", 00:10:19.011 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:19.011 }, 00:10:19.011 "ctrlr_data": { 00:10:19.011 "cntlid": 1, 00:10:19.011 "vendor_id": "0x8086", 00:10:19.011 "model_number": "SPDK bdev Controller", 00:10:19.011 "serial_number": "SPDK0", 00:10:19.011 "firmware_revision": "24.09", 00:10:19.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:19.011 "oacs": { 00:10:19.011 "security": 0, 00:10:19.011 "format": 0, 00:10:19.011 "firmware": 0, 00:10:19.011 "ns_manage": 0 00:10:19.011 }, 00:10:19.011 
"multi_ctrlr": true, 00:10:19.011 "ana_reporting": false 00:10:19.011 }, 00:10:19.011 "vs": { 00:10:19.011 "nvme_version": "1.3" 00:10:19.011 }, 00:10:19.011 "ns_data": { 00:10:19.011 "id": 1, 00:10:19.011 "can_share": true 00:10:19.011 } 00:10:19.011 } 00:10:19.011 ], 00:10:19.011 "mp_policy": "active_passive" 00:10:19.011 } 00:10:19.011 } 00:10:19.011 ] 00:10:19.011 10:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1324737 00:10:19.011 10:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:19.011 10:58:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:19.011 Running I/O for 10 seconds... 00:10:19.948 Latency(us) 00:10:19.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:19.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.948 Nvme0n1 : 1.00 21309.00 83.24 0.00 0.00 0.00 0.00 0.00 00:10:19.948 =================================================================================================================== 00:10:19.948 Total : 21309.00 83.24 0.00 0.00 0.00 0.00 0.00 00:10:19.948 00:10:20.884 10:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c4b7b9e6-9552-46ab-aab9-697ef6ec1b60 00:10:21.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.143 Nvme0n1 : 2.00 21627.00 84.48 0.00 0.00 0.00 0.00 0.00 00:10:21.143 =================================================================================================================== 00:10:21.143 Total : 21627.00 84.48 0.00 0.00 0.00 0.00 0.00 00:10:21.143 00:10:21.143 true 00:10:21.143 10:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4b7b9e6-9552-46ab-aab9-697ef6ec1b60 00:10:21.143 10:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:21.403 10:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:21.403 10:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:21.403 10:58:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1324737 00:10:21.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.971 Nvme0n1 : 3.00 21673.00 84.66 0.00 0.00 0.00 0.00 0.00 00:10:21.971 =================================================================================================================== 00:10:21.971 Total : 21673.00 84.66 0.00 0.00 0.00 0.00 0.00 00:10:21.971 00:10:22.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.910 Nvme0n1 : 4.00 21743.00 84.93 0.00 0.00 0.00 0.00 0.00 00:10:22.910 =================================================================================================================== 00:10:22.910 Total : 21743.00 84.93 0.00 0.00 0.00 0.00 0.00 00:10:22.910 00:10:24.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:10:24.289 Nvme0n1 : 5.00 21789.60 85.12 0.00 0.00 0.00 0.00 0.00 00:10:24.289 =================================================================================================================== 00:10:24.289 Total : 21789.60 85.12 0.00 0.00 0.00 0.00 0.00 00:10:24.289 00:10:25.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.226 Nvme0n1 : 6.00 21836.50 85.30 0.00 0.00 0.00 0.00 0.00 00:10:25.226 =================================================================================================================== 00:10:25.226 Total : 21836.50 85.30 0.00 0.00 0.00 0.00 0.00 00:10:25.226 00:10:26.196 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:26.196 Nvme0n1 : 7.00 21864.57 85.41 0.00 0.00 0.00 0.00 0.00 00:10:26.196 =================================================================================================================== 00:10:26.196 Total : 21864.57 85.41 0.00 0.00 0.00 0.00 0.00 00:10:26.196 00:10:27.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:27.134 Nvme0n1 : 8.00 21906.88 85.57 0.00 0.00 0.00 0.00 0.00 00:10:27.134 =================================================================================================================== 00:10:27.134 Total : 21906.88 85.57 0.00 0.00 0.00 0.00 0.00 00:10:27.134 00:10:28.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.073 Nvme0n1 : 9.00 21976.22 85.84 0.00 0.00 0.00 0.00 0.00 00:10:28.073 =================================================================================================================== 00:10:28.073 Total : 21976.22 85.84 0.00 0.00 0.00 0.00 0.00 00:10:28.073 00:10:29.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:29.011 Nvme0n1 : 10.00 21968.70 85.82 0.00 0.00 0.00 0.00 0.00 00:10:29.011 =================================================================================================================== 00:10:29.011 Total : 21968.70 85.82 0.00 0.00 0.00 0.00 0.00 00:10:29.011 00:10:29.011 00:10:29.011 Latency(us) 00:10:29.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:29.011 Nvme0n1 : 10.00 21973.65 85.83 0.00 0.00 5821.81 2450.48 26670.30 00:10:29.011 =================================================================================================================== 00:10:29.011 Total : 21973.65 85.83 0.00 0.00 5821.81 2450.48 26670.30 00:10:29.011 0 00:10:29.011 10:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1324502 00:10:29.011 10:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1324502 ']' 00:10:29.011 10:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1324502 00:10:29.011 10:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:10:29.011 10:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.011 10:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1324502 00:10:29.011 10:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:29.011 
10:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:29.011 10:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1324502' 00:10:29.011 killing process with pid 1324502 00:10:29.011 10:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1324502 00:10:29.011 Received shutdown signal, test time was about 10.000000 seconds 00:10:29.011 00:10:29.011 Latency(us) 00:10:29.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.012 =================================================================================================================== 00:10:29.012 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:29.012 10:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1324502 00:10:29.304 10:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:29.563 10:58:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:29.563 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4b7b9e6-9552-46ab-aab9-697ef6ec1b60 00:10:29.563 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:29.822 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:29.822 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:29.823 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:30.082 [2024-07-26 10:58:49.362577] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:30.082 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4b7b9e6-9552-46ab-aab9-697ef6ec1b60 00:10:30.082 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:30.082 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4b7b9e6-9552-46ab-aab9-697ef6ec1b60 00:10:30.082 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:30.082 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:30.082 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:30.082 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:30.082 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:30.082 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:30.082 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:30.082 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:30.082 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4b7b9e6-9552-46ab-aab9-697ef6ec1b60 00:10:30.082 request: 00:10:30.082 { 00:10:30.082 "uuid": "c4b7b9e6-9552-46ab-aab9-697ef6ec1b60", 00:10:30.082 "method": "bdev_lvol_get_lvstores", 00:10:30.082 "req_id": 1 00:10:30.082 } 00:10:30.082 Got JSON-RPC error response 00:10:30.082 response: 00:10:30.082 { 00:10:30.082 "code": -19, 00:10:30.082 "message": "No such device" 00:10:30.082 } 00:10:30.446 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:30.446 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:30.446 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:30.446 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:30.446 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:30.446 aio_bdev 00:10:30.446 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e4729604-0de2-4354-bc68-c2b958119637 00:10:30.446 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=e4729604-0de2-4354-bc68-c2b958119637 00:10:30.446 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.446 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:10:30.446 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.446 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.446 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:30.446 10:58:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b e4729604-0de2-4354-bc68-c2b958119637 -t 2000 00:10:30.705 [ 00:10:30.705 { 00:10:30.705 "name": "e4729604-0de2-4354-bc68-c2b958119637", 00:10:30.705 "aliases": [ 00:10:30.705 "lvs/lvol" 00:10:30.705 ], 00:10:30.705 "product_name": "Logical Volume", 00:10:30.705 "block_size": 4096, 00:10:30.705 "num_blocks": 38912, 00:10:30.705 "uuid": "e4729604-0de2-4354-bc68-c2b958119637", 00:10:30.705 "assigned_rate_limits": { 00:10:30.705 "rw_ios_per_sec": 0, 00:10:30.705 "rw_mbytes_per_sec": 0, 00:10:30.705 "r_mbytes_per_sec": 0, 00:10:30.705 "w_mbytes_per_sec": 0 00:10:30.705 }, 00:10:30.705 "claimed": false, 00:10:30.705 "zoned": false, 00:10:30.705 "supported_io_types": { 00:10:30.705 "read": true, 00:10:30.705 "write": true, 00:10:30.705 "unmap": true, 00:10:30.705 "flush": false, 00:10:30.705 "reset": true, 00:10:30.705 "nvme_admin": false, 00:10:30.705 "nvme_io": false, 00:10:30.705 "nvme_io_md": false, 00:10:30.705 "write_zeroes": true, 00:10:30.705 "zcopy": false, 00:10:30.705 "get_zone_info": false, 00:10:30.705 "zone_management": false, 00:10:30.705 "zone_append": false, 00:10:30.705 "compare": false, 00:10:30.705 "compare_and_write": false, 00:10:30.705 "abort": false, 00:10:30.705 "seek_hole": true, 00:10:30.705 "seek_data": true, 00:10:30.705 "copy": false, 00:10:30.705 "nvme_iov_md": false 00:10:30.705 }, 00:10:30.705 "driver_specific": { 00:10:30.705 "lvol": { 00:10:30.705 "lvol_store_uuid": "c4b7b9e6-9552-46ab-aab9-697ef6ec1b60", 00:10:30.705 "base_bdev": "aio_bdev", 00:10:30.705 "thin_provision": false, 00:10:30.705 "num_allocated_clusters": 38, 00:10:30.705 "snapshot": false, 00:10:30.705 "clone": false, 00:10:30.705 "esnap_clone": false 00:10:30.705 } 00:10:30.705 } 00:10:30.705 } 00:10:30.705 ] 00:10:30.706 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:10:30.706 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4b7b9e6-9552-46ab-aab9-697ef6ec1b60 00:10:30.706 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:30.965 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:30.965 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c4b7b9e6-9552-46ab-aab9-697ef6ec1b60 00:10:30.965 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:31.224 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:31.224 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e4729604-0de2-4354-bc68-c2b958119637 00:10:31.224 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c4b7b9e6-9552-46ab-aab9-697ef6ec1b60 00:10:31.482 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:31.744 10:58:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:31.744 00:10:31.744 real 0m15.897s 00:10:31.744 user 0m15.460s 00:10:31.744 sys 0m1.571s 00:10:31.744 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.744 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:31.745 ************************************ 00:10:31.745 END TEST lvs_grow_clean 00:10:31.745 ************************************ 00:10:31.745 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:31.745 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:31.745 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.745 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:31.745 ************************************ 00:10:31.745 START TEST lvs_grow_dirty 00:10:31.745 ************************************ 00:10:31.745 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:31.745 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:31.745 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:31.745 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:31.745 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:31.745 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:31.745 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:31.745 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:31.745 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:31.745 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:32.004 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:32.004 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:32.004 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=db015730-9c8e-40f5-9303-f019f471f211 00:10:32.004 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db015730-9c8e-40f5-9303-f019f471f211 00:10:32.004 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:32.264 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:32.264 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:32.264 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u db015730-9c8e-40f5-9303-f019f471f211 lvol 150 00:10:32.524 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=57db412e-025c-4eaa-b2bc-f622cbb28e3a 00:10:32.524 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:32.524 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:32.524 [2024-07-26 10:58:51.981797] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:32.524 [2024-07-26 10:58:51.981848] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:32.524 true 00:10:32.524 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db015730-9c8e-40f5-9303-f019f471f211 00:10:32.524 10:58:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:32.784 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:32.784 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:33.044 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 57db412e-025c-4eaa-b2bc-f622cbb28e3a 00:10:33.044 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:33.303 [2024-07-26 10:58:52.671852] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.304 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
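Condensed, the target-side provisioning that lvs_grow has now done for both variants is the RPC sequence below; paths are shortened to the spdk tree, $lvs and $lvol stand for the UUIDs returned by the two create calls, and the 200M/400M/150M sizes are the defaults from nvmf_lvs_grow.sh:

  rm -f test/nvmf/target/aio_bdev && truncate -s 200M test/nvmf/target/aio_bdev
  ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  ./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs            # -> $lvs, 49 data clusters
  ./scripts/rpc.py bdev_lvol_create -u $lvs lvol 150           # -> $lvol
  truncate -s 400M test/nvmf/target/aio_bdev                   # grow the backing file...
  ./scripts/rpc.py bdev_aio_rescan aio_bdev                    # ...and let the AIO bdev pick it up
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The operation actually under test, bdev_lvol_grow_lvstore -u $lvs, is issued later while bdevperf I/O is in flight; total_data_clusters is then expected to jump from 49 to 99, which is what the jq checks against bdev_lvol_get_lvstores assert.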
00:10:33.564 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1327323 00:10:33.564 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:33.564 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:33.564 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1327323 /var/tmp/bdevperf.sock 00:10:33.564 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1327323 ']' 00:10:33.564 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:33.564 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:33.564 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:33.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:33.564 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:33.564 10:58:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:33.564 [2024-07-26 10:58:52.896726] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:33.564 [2024-07-26 10:58:52.896774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327323 ] 00:10:33.564 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.564 [2024-07-26 10:58:52.947622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.564 [2024-07-26 10:58:53.019524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.502 10:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:34.502 10:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:34.502 10:58:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:34.761 Nvme0n1 00:10:34.761 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:34.761 [ 00:10:34.761 { 00:10:34.761 "name": "Nvme0n1", 00:10:34.761 "aliases": [ 00:10:34.761 "57db412e-025c-4eaa-b2bc-f622cbb28e3a" 00:10:34.761 ], 00:10:34.761 "product_name": "NVMe disk", 00:10:34.761 "block_size": 4096, 00:10:34.761 "num_blocks": 38912, 00:10:34.761 "uuid": "57db412e-025c-4eaa-b2bc-f622cbb28e3a", 00:10:34.761 "assigned_rate_limits": { 00:10:34.761 "rw_ios_per_sec": 0, 00:10:34.761 "rw_mbytes_per_sec": 0, 00:10:34.761 "r_mbytes_per_sec": 0, 00:10:34.761 "w_mbytes_per_sec": 0 00:10:34.761 }, 00:10:34.761 "claimed": false, 00:10:34.761 "zoned": false, 00:10:34.761 "supported_io_types": { 00:10:34.761 "read": true, 00:10:34.761 "write": true, 00:10:34.761 "unmap": true, 00:10:34.761 "flush": true, 00:10:34.761 "reset": true, 00:10:34.761 "nvme_admin": true, 00:10:34.761 "nvme_io": true, 00:10:34.762 "nvme_io_md": false, 00:10:34.762 "write_zeroes": true, 00:10:34.762 "zcopy": false, 00:10:34.762 "get_zone_info": false, 00:10:34.762 "zone_management": false, 00:10:34.762 "zone_append": false, 00:10:34.762 "compare": true, 00:10:34.762 "compare_and_write": true, 00:10:34.762 "abort": true, 00:10:34.762 "seek_hole": false, 00:10:34.762 "seek_data": false, 00:10:34.762 "copy": true, 00:10:34.762 "nvme_iov_md": false 00:10:34.762 }, 00:10:34.762 "memory_domains": [ 00:10:34.762 { 00:10:34.762 "dma_device_id": "system", 00:10:34.762 "dma_device_type": 1 00:10:34.762 } 00:10:34.762 ], 00:10:34.762 "driver_specific": { 00:10:34.762 "nvme": [ 00:10:34.762 { 00:10:34.762 "trid": { 00:10:34.762 "trtype": "TCP", 00:10:34.762 "adrfam": "IPv4", 00:10:34.762 "traddr": "10.0.0.2", 00:10:34.762 "trsvcid": "4420", 00:10:34.762 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:34.762 }, 00:10:34.762 "ctrlr_data": { 00:10:34.762 "cntlid": 1, 00:10:34.762 "vendor_id": "0x8086", 00:10:34.762 "model_number": "SPDK bdev Controller", 00:10:34.762 "serial_number": "SPDK0", 00:10:34.762 "firmware_revision": "24.09", 00:10:34.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:34.762 "oacs": { 00:10:34.762 "security": 0, 00:10:34.762 "format": 0, 00:10:34.762 "firmware": 0, 00:10:34.762 "ns_manage": 0 00:10:34.762 }, 00:10:34.762 
"multi_ctrlr": true, 00:10:34.762 "ana_reporting": false 00:10:34.762 }, 00:10:34.762 "vs": { 00:10:34.762 "nvme_version": "1.3" 00:10:34.762 }, 00:10:34.762 "ns_data": { 00:10:34.762 "id": 1, 00:10:34.762 "can_share": true 00:10:34.762 } 00:10:34.762 } 00:10:34.762 ], 00:10:34.762 "mp_policy": "active_passive" 00:10:34.762 } 00:10:34.762 } 00:10:34.762 ] 00:10:35.021 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1327559 00:10:35.021 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:35.021 10:58:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:35.021 Running I/O for 10 seconds... 00:10:35.962 Latency(us) 00:10:35.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:35.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.962 Nvme0n1 : 1.00 21240.00 82.97 0.00 0.00 0.00 0.00 0.00 00:10:35.962 =================================================================================================================== 00:10:35.962 Total : 21240.00 82.97 0.00 0.00 0.00 0.00 0.00 00:10:35.962 00:10:36.900 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u db015730-9c8e-40f5-9303-f019f471f211 00:10:36.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.900 Nvme0n1 : 2.00 21591.00 84.34 0.00 0.00 0.00 0.00 0.00 00:10:36.900 =================================================================================================================== 00:10:36.900 Total : 21591.00 84.34 0.00 0.00 0.00 0.00 0.00 00:10:36.900 00:10:37.160 true 00:10:37.160 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db015730-9c8e-40f5-9303-f019f471f211 00:10:37.160 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:37.160 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:37.160 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:37.160 10:58:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1327559 00:10:38.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.099 Nvme0n1 : 3.00 21671.00 84.65 0.00 0.00 0.00 0.00 0.00 00:10:38.099 =================================================================================================================== 00:10:38.099 Total : 21671.00 84.65 0.00 0.00 0.00 0.00 0.00 00:10:38.099 00:10:39.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.036 Nvme0n1 : 4.00 21983.75 85.87 0.00 0.00 0.00 0.00 0.00 00:10:39.036 =================================================================================================================== 00:10:39.036 Total : 21983.75 85.87 0.00 0.00 0.00 0.00 0.00 00:10:39.036 00:10:39.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:10:39.977 Nvme0n1 : 5.00 21995.00 85.92 0.00 0.00 0.00 0.00 0.00 00:10:39.977 =================================================================================================================== 00:10:39.977 Total : 21995.00 85.92 0.00 0.00 0.00 0.00 0.00 00:10:39.977 00:10:40.916 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.916 Nvme0n1 : 6.00 21991.83 85.91 0.00 0.00 0.00 0.00 0.00 00:10:40.916 =================================================================================================================== 00:10:40.916 Total : 21991.83 85.91 0.00 0.00 0.00 0.00 0.00 00:10:40.916 00:10:42.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.296 Nvme0n1 : 7.00 22031.86 86.06 0.00 0.00 0.00 0.00 0.00 00:10:42.296 =================================================================================================================== 00:10:42.296 Total : 22031.86 86.06 0.00 0.00 0.00 0.00 0.00 00:10:42.296 00:10:43.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.235 Nvme0n1 : 8.00 22084.62 86.27 0.00 0.00 0.00 0.00 0.00 00:10:43.235 =================================================================================================================== 00:10:43.235 Total : 22084.62 86.27 0.00 0.00 0.00 0.00 0.00 00:10:43.235 00:10:44.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.175 Nvme0n1 : 9.00 22086.11 86.27 0.00 0.00 0.00 0.00 0.00 00:10:44.175 =================================================================================================================== 00:10:44.175 Total : 22086.11 86.27 0.00 0.00 0.00 0.00 0.00 00:10:44.175 00:10:45.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.112 Nvme0n1 : 10.00 22058.50 86.17 0.00 0.00 0.00 0.00 0.00 00:10:45.112 =================================================================================================================== 00:10:45.112 Total : 22058.50 86.17 0.00 0.00 0.00 0.00 0.00 00:10:45.112 00:10:45.112 00:10:45.112 Latency(us) 00:10:45.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.112 Nvme0n1 : 10.01 22055.62 86.15 0.00 0.00 5799.26 3105.84 30317.52 00:10:45.112 =================================================================================================================== 00:10:45.112 Total : 22055.62 86.15 0.00 0.00 5799.26 3105.84 30317.52 00:10:45.112 0 00:10:45.112 10:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1327323 00:10:45.112 10:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1327323 ']' 00:10:45.112 10:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1327323 00:10:45.112 10:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:45.112 10:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.112 10:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1327323 00:10:45.112 10:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:45.112 
10:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:45.112 10:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1327323' 00:10:45.112 killing process with pid 1327323 00:10:45.112 10:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1327323 00:10:45.112 Received shutdown signal, test time was about 10.000000 seconds 00:10:45.112 00:10:45.112 Latency(us) 00:10:45.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.112 =================================================================================================================== 00:10:45.112 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:45.112 10:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1327323 00:10:45.371 10:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:45.372 10:59:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:45.632 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db015730-9c8e-40f5-9303-f019f471f211 00:10:45.632 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1323996 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1323996 00:10:45.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1323996 Killed "${NVMF_APP[@]}" "$@" 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1329409 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1329409 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1329409 ']' 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.892 10:59:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:45.892 [2024-07-26 10:59:05.284791] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:45.892 [2024-07-26 10:59:05.284841] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.892 EAL: No free 2048 kB hugepages reported on node 1 00:10:45.892 [2024-07-26 10:59:05.341774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.151 [2024-07-26 10:59:05.422135] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.151 [2024-07-26 10:59:05.422167] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.151 [2024-07-26 10:59:05.422174] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.151 [2024-07-26 10:59:05.422180] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.151 [2024-07-26 10:59:05.422186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:46.151 [2024-07-26 10:59:05.422203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.720 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:46.720 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:46.720 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:46.720 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:46.720 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:46.720 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.720 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:46.981 [2024-07-26 10:59:06.271534] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:46.981 [2024-07-26 10:59:06.271613] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:46.981 [2024-07-26 10:59:06.271638] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:46.981 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:46.981 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 57db412e-025c-4eaa-b2bc-f622cbb28e3a 00:10:46.981 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=57db412e-025c-4eaa-b2bc-f622cbb28e3a 00:10:46.981 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:46.981 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:46.981 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:46.981 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:46.981 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:46.981 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 57db412e-025c-4eaa-b2bc-f622cbb28e3a -t 2000 00:10:47.241 [ 00:10:47.241 { 00:10:47.241 "name": "57db412e-025c-4eaa-b2bc-f622cbb28e3a", 00:10:47.241 "aliases": [ 00:10:47.241 "lvs/lvol" 00:10:47.241 ], 00:10:47.241 "product_name": "Logical Volume", 00:10:47.241 "block_size": 4096, 00:10:47.241 "num_blocks": 38912, 00:10:47.241 "uuid": "57db412e-025c-4eaa-b2bc-f622cbb28e3a", 00:10:47.241 "assigned_rate_limits": { 00:10:47.241 "rw_ios_per_sec": 0, 00:10:47.241 "rw_mbytes_per_sec": 0, 00:10:47.241 "r_mbytes_per_sec": 0, 00:10:47.241 "w_mbytes_per_sec": 0 00:10:47.241 }, 00:10:47.241 "claimed": false, 00:10:47.241 "zoned": false, 
00:10:47.241 "supported_io_types": { 00:10:47.241 "read": true, 00:10:47.241 "write": true, 00:10:47.241 "unmap": true, 00:10:47.241 "flush": false, 00:10:47.241 "reset": true, 00:10:47.241 "nvme_admin": false, 00:10:47.241 "nvme_io": false, 00:10:47.241 "nvme_io_md": false, 00:10:47.241 "write_zeroes": true, 00:10:47.241 "zcopy": false, 00:10:47.241 "get_zone_info": false, 00:10:47.241 "zone_management": false, 00:10:47.241 "zone_append": false, 00:10:47.241 "compare": false, 00:10:47.241 "compare_and_write": false, 00:10:47.241 "abort": false, 00:10:47.241 "seek_hole": true, 00:10:47.241 "seek_data": true, 00:10:47.241 "copy": false, 00:10:47.241 "nvme_iov_md": false 00:10:47.241 }, 00:10:47.241 "driver_specific": { 00:10:47.241 "lvol": { 00:10:47.241 "lvol_store_uuid": "db015730-9c8e-40f5-9303-f019f471f211", 00:10:47.241 "base_bdev": "aio_bdev", 00:10:47.241 "thin_provision": false, 00:10:47.241 "num_allocated_clusters": 38, 00:10:47.241 "snapshot": false, 00:10:47.241 "clone": false, 00:10:47.241 "esnap_clone": false 00:10:47.241 } 00:10:47.241 } 00:10:47.241 } 00:10:47.241 ] 00:10:47.241 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:47.241 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db015730-9c8e-40f5-9303-f019f471f211 00:10:47.241 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:47.501 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:47.501 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db015730-9c8e-40f5-9303-f019f471f211 00:10:47.501 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:47.501 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:47.501 10:59:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:47.804 [2024-07-26 10:59:07.135996] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:47.804 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db015730-9c8e-40f5-9303-f019f471f211 00:10:47.804 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:47.804 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db015730-9c8e-40f5-9303-f019f471f211 00:10:47.804 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:47.804 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:10:47.804 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:47.804 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.804 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:47.804 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:47.804 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:47.804 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:47.804 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db015730-9c8e-40f5-9303-f019f471f211 00:10:48.064 request: 00:10:48.064 { 00:10:48.064 "uuid": "db015730-9c8e-40f5-9303-f019f471f211", 00:10:48.064 "method": "bdev_lvol_get_lvstores", 00:10:48.064 "req_id": 1 00:10:48.064 } 00:10:48.064 Got JSON-RPC error response 00:10:48.064 response: 00:10:48.064 { 00:10:48.064 "code": -19, 00:10:48.064 "message": "No such device" 00:10:48.064 } 00:10:48.064 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:48.064 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:48.064 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:48.064 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:48.064 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:48.064 aio_bdev 00:10:48.064 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 57db412e-025c-4eaa-b2bc-f622cbb28e3a 00:10:48.064 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=57db412e-025c-4eaa-b2bc-f622cbb28e3a 00:10:48.064 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:48.064 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:48.064 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:48.064 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:48.064 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:48.324 10:59:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 57db412e-025c-4eaa-b2bc-f622cbb28e3a -t 2000 00:10:48.585 [ 00:10:48.585 { 00:10:48.585 "name": "57db412e-025c-4eaa-b2bc-f622cbb28e3a", 00:10:48.585 "aliases": [ 00:10:48.585 "lvs/lvol" 00:10:48.585 ], 00:10:48.585 "product_name": "Logical Volume", 00:10:48.585 "block_size": 4096, 00:10:48.585 "num_blocks": 38912, 00:10:48.585 "uuid": "57db412e-025c-4eaa-b2bc-f622cbb28e3a", 00:10:48.585 "assigned_rate_limits": { 00:10:48.585 "rw_ios_per_sec": 0, 00:10:48.586 "rw_mbytes_per_sec": 0, 00:10:48.586 "r_mbytes_per_sec": 0, 00:10:48.586 "w_mbytes_per_sec": 0 00:10:48.586 }, 00:10:48.586 "claimed": false, 00:10:48.586 "zoned": false, 00:10:48.586 "supported_io_types": { 00:10:48.586 "read": true, 00:10:48.586 "write": true, 00:10:48.586 "unmap": true, 00:10:48.586 "flush": false, 00:10:48.586 "reset": true, 00:10:48.586 "nvme_admin": false, 00:10:48.586 "nvme_io": false, 00:10:48.586 "nvme_io_md": false, 00:10:48.586 "write_zeroes": true, 00:10:48.586 "zcopy": false, 00:10:48.586 "get_zone_info": false, 00:10:48.586 "zone_management": false, 00:10:48.586 "zone_append": false, 00:10:48.586 "compare": false, 00:10:48.586 "compare_and_write": false, 00:10:48.586 "abort": false, 00:10:48.586 "seek_hole": true, 00:10:48.586 "seek_data": true, 00:10:48.586 "copy": false, 00:10:48.586 "nvme_iov_md": false 00:10:48.586 }, 00:10:48.586 "driver_specific": { 00:10:48.586 "lvol": { 00:10:48.586 "lvol_store_uuid": "db015730-9c8e-40f5-9303-f019f471f211", 00:10:48.586 "base_bdev": "aio_bdev", 00:10:48.586 "thin_provision": false, 00:10:48.586 "num_allocated_clusters": 38, 00:10:48.586 "snapshot": false, 00:10:48.586 "clone": false, 00:10:48.586 "esnap_clone": false 00:10:48.586 } 00:10:48.586 } 00:10:48.586 } 00:10:48.586 ] 00:10:48.586 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:48.586 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db015730-9c8e-40f5-9303-f019f471f211 00:10:48.586 10:59:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:48.586 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:48.586 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db015730-9c8e-40f5-9303-f019f471f211 00:10:48.586 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:48.847 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:48.847 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 57db412e-025c-4eaa-b2bc-f622cbb28e3a 00:10:49.107 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db015730-9c8e-40f5-9303-f019f471f211 
00:10:49.107 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:49.367 00:10:49.367 real 0m17.670s 00:10:49.367 user 0m45.183s 00:10:49.367 sys 0m4.019s 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:49.367 ************************************ 00:10:49.367 END TEST lvs_grow_dirty 00:10:49.367 ************************************ 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:49.367 nvmf_trace.0 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:49.367 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:49.367 rmmod nvme_tcp 00:10:49.367 rmmod nvme_fabrics 00:10:49.627 rmmod nvme_keyring 00:10:49.627 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:49.627 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:49.627 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:49.627 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1329409 ']' 00:10:49.627 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1329409 00:10:49.627 
10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1329409 ']' 00:10:49.627 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1329409 00:10:49.627 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:49.627 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:49.627 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1329409 00:10:49.627 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:49.627 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:49.627 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1329409' 00:10:49.627 killing process with pid 1329409 00:10:49.627 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1329409 00:10:49.627 10:59:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1329409 00:10:49.627 10:59:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:49.627 10:59:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:49.627 10:59:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:49.627 10:59:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:49.627 10:59:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:49.627 10:59:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.627 10:59:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.627 10:59:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:52.169 00:10:52.169 real 0m42.843s 00:10:52.169 user 1m6.494s 00:10:52.169 sys 0m10.153s 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:52.169 ************************************ 00:10:52.169 END TEST nvmf_lvs_grow 00:10:52.169 ************************************ 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:52.169 ************************************ 00:10:52.169 START TEST nvmf_bdev_io_wait 00:10:52.169 ************************************ 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:52.169 * Looking for test storage... 00:10:52.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:52.169 
10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:52.169 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:52.170 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:10:52.170 10:59:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:10:57.451 10:59:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:57.451 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:57.451 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:57.451 Found net devices under 0000:86:00.0: cvl_0_0 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:57.451 Found net devices under 0000:86:00.1: cvl_0_1 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:57.451 10:59:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:57.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:10:57.451 00:10:57.451 --- 10.0.0.2 ping statistics --- 00:10:57.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.451 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:10:57.451 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:57.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:10:57.451 00:10:57.451 --- 10.0.0.1 ping statistics --- 00:10:57.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.451 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1333459 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1333459 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1333459 ']' 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.452 10:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:57.712 [2024-07-26 10:59:16.950977] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:57.712 [2024-07-26 10:59:16.951022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.712 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.712 [2024-07-26 10:59:17.011167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.712 [2024-07-26 10:59:17.090471] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.712 [2024-07-26 10:59:17.090509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.712 [2024-07-26 10:59:17.090516] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.712 [2024-07-26 10:59:17.090522] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.712 [2024-07-26 10:59:17.090527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.712 [2024-07-26 10:59:17.094062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.712 [2024-07-26 10:59:17.094094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.712 [2024-07-26 10:59:17.094184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.712 [2024-07-26 10:59:17.094185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.283 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:58.283 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:58.283 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:58.283 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:58.283 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.544 10:59:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:58.544 [2024-07-26 10:59:17.860381] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:58.544 Malloc0 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:58.544 [2024-07-26 10:59:17.915590] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1333710 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1333712 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:58.544 { 00:10:58.544 "params": { 00:10:58.544 "name": "Nvme$subsystem", 00:10:58.544 "trtype": "$TEST_TRANSPORT", 00:10:58.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:58.544 "adrfam": "ipv4", 00:10:58.544 "trsvcid": "$NVMF_PORT", 00:10:58.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:58.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:58.544 "hdgst": ${hdgst:-false}, 00:10:58.544 "ddgst": ${ddgst:-false} 00:10:58.544 }, 00:10:58.544 "method": "bdev_nvme_attach_controller" 00:10:58.544 } 00:10:58.544 EOF 00:10:58.544 )") 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1333714 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:58.544 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:58.545 { 00:10:58.545 "params": { 00:10:58.545 "name": "Nvme$subsystem", 00:10:58.545 "trtype": "$TEST_TRANSPORT", 00:10:58.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:58.545 "adrfam": "ipv4", 00:10:58.545 "trsvcid": "$NVMF_PORT", 00:10:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:58.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:58.545 "hdgst": ${hdgst:-false}, 00:10:58.545 "ddgst": ${ddgst:-false} 00:10:58.545 }, 00:10:58.545 "method": "bdev_nvme_attach_controller" 00:10:58.545 } 00:10:58.545 EOF 00:10:58.545 )") 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1333717 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:58.545 { 00:10:58.545 "params": { 00:10:58.545 "name": "Nvme$subsystem", 00:10:58.545 "trtype": "$TEST_TRANSPORT", 00:10:58.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:58.545 "adrfam": "ipv4", 00:10:58.545 "trsvcid": "$NVMF_PORT", 00:10:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:58.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:58.545 "hdgst": ${hdgst:-false}, 00:10:58.545 "ddgst": ${ddgst:-false} 00:10:58.545 }, 00:10:58.545 "method": "bdev_nvme_attach_controller" 00:10:58.545 } 00:10:58.545 EOF 00:10:58.545 )") 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:58.545 { 00:10:58.545 "params": { 00:10:58.545 "name": "Nvme$subsystem", 00:10:58.545 "trtype": "$TEST_TRANSPORT", 00:10:58.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:58.545 "adrfam": "ipv4", 00:10:58.545 "trsvcid": "$NVMF_PORT", 00:10:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:58.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:58.545 "hdgst": ${hdgst:-false}, 00:10:58.545 "ddgst": ${ddgst:-false} 00:10:58.545 }, 00:10:58.545 "method": "bdev_nvme_attach_controller" 00:10:58.545 } 00:10:58.545 EOF 00:10:58.545 )") 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1333710 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:58.545 "params": { 00:10:58.545 "name": "Nvme1", 00:10:58.545 "trtype": "tcp", 00:10:58.545 "traddr": "10.0.0.2", 00:10:58.545 "adrfam": "ipv4", 00:10:58.545 "trsvcid": "4420", 00:10:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:58.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:58.545 "hdgst": false, 00:10:58.545 "ddgst": false 00:10:58.545 }, 00:10:58.545 "method": "bdev_nvme_attach_controller" 00:10:58.545 }' 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
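Note: the xtrace above shows how gen_nvmf_target_json builds the bdevperf configuration: one heredoc fragment per subsystem is appended to the config array, the fragments are joined with IFS=',' and the result is printed through jq (the remaining IFS/printf/jq lines that follow are the same expansion repeated for the other bdevperf instances). A minimal sketch of that pattern, assuming the shell variables this run exports (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420); the function below is illustrative, not the exact common.sh helper:

gen_attach_fragment() {                    # emit one bdev_nvme_attach_controller entry
    local n=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
# wrap the fragment in the bdev-subsystem layout that bdevperf's --json option reads
gen_attach_fragment 1 | jq '{subsystems: [{subsystem: "bdev", config: [.]}]}'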
00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:58.545 "params": { 00:10:58.545 "name": "Nvme1", 00:10:58.545 "trtype": "tcp", 00:10:58.545 "traddr": "10.0.0.2", 00:10:58.545 "adrfam": "ipv4", 00:10:58.545 "trsvcid": "4420", 00:10:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:58.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:58.545 "hdgst": false, 00:10:58.545 "ddgst": false 00:10:58.545 }, 00:10:58.545 "method": "bdev_nvme_attach_controller" 00:10:58.545 }' 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:58.545 "params": { 00:10:58.545 "name": "Nvme1", 00:10:58.545 "trtype": "tcp", 00:10:58.545 "traddr": "10.0.0.2", 00:10:58.545 "adrfam": "ipv4", 00:10:58.545 "trsvcid": "4420", 00:10:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:58.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:58.545 "hdgst": false, 00:10:58.545 "ddgst": false 00:10:58.545 }, 00:10:58.545 "method": "bdev_nvme_attach_controller" 00:10:58.545 }' 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:58.545 10:59:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:58.545 "params": { 00:10:58.545 "name": "Nvme1", 00:10:58.545 "trtype": "tcp", 00:10:58.545 "traddr": "10.0.0.2", 00:10:58.545 "adrfam": "ipv4", 00:10:58.545 "trsvcid": "4420", 00:10:58.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:58.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:58.545 "hdgst": false, 00:10:58.545 "ddgst": false 00:10:58.545 }, 00:10:58.545 "method": "bdev_nvme_attach_controller" 00:10:58.545 }' 00:10:58.545 [2024-07-26 10:59:17.965451] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:58.545 [2024-07-26 10:59:17.965452] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:58.545 [2024-07-26 10:59:17.965500] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-26 10:59:17.965501] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:58.545 --proc-type=auto ] 00:10:58.545 [2024-07-26 10:59:17.966525] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:58.545 [2024-07-26 10:59:17.966569] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:58.545 [2024-07-26 10:59:17.966656] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
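Note: with the four JSON configs resolved, bdev_io_wait.sh launches four bdevperf instances in parallel, one per I/O type (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), feeds each its config through --json /dev/fd/63 via process substitution, and records WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID for the wait calls that follow. A condensed sketch of that orchestration, assuming a pre-generated nvme.json stands in for the process substitution; the bdevperf path matches this workspace:

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
declare -A pids
i=1
for wl in write read flush unmap; do
    mask=$(printf '0x%x' $((0x10 << (i - 1))))   # 0x10, 0x20, 0x40, 0x80
    "$BDEVPERF" -m "$mask" -i "$i" --json nvme.json -q 128 -o 4096 -w "$wl" -t 1 -s 256 &
    pids[$wl]=$!                                 # remember each instance's PID
    i=$((i + 1))
done
for wl in write read flush unmap; do
    wait "${pids[$wl]}"                          # join the runs, like the wait calls traced below
done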
00:10:58.545 [2024-07-26 10:59:17.966699] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:58.545 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.806 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.806 [2024-07-26 10:59:18.153533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.806 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.806 [2024-07-26 10:59:18.231477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:58.807 [2024-07-26 10:59:18.246453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.066 EAL: No free 2048 kB hugepages reported on node 1 00:10:59.066 [2024-07-26 10:59:18.324843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:59.066 [2024-07-26 10:59:18.346226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.066 [2024-07-26 10:59:18.388087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.066 [2024-07-26 10:59:18.434888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:59.066 [2024-07-26 10:59:18.464013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:59.066 Running I/O for 1 seconds... 00:10:59.326 Running I/O for 1 seconds... 00:10:59.326 Running I/O for 1 seconds... 00:10:59.326 Running I/O for 1 seconds... 00:11:00.267 00:11:00.267 Latency(us) 00:11:00.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.267 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:00.267 Nvme1n1 : 1.01 7578.12 29.60 0.00 0.00 16769.77 8035.28 37156.06 00:11:00.267 =================================================================================================================== 00:11:00.267 Total : 7578.12 29.60 0.00 0.00 16769.77 8035.28 37156.06 00:11:00.267 00:11:00.267 Latency(us) 00:11:00.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.267 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:00.267 Nvme1n1 : 1.00 245117.82 957.49 0.00 0.00 520.34 211.92 641.11 00:11:00.267 =================================================================================================================== 00:11:00.267 Total : 245117.82 957.49 0.00 0.00 520.34 211.92 641.11 00:11:00.267 00:11:00.267 Latency(us) 00:11:00.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.267 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:00.267 Nvme1n1 : 1.01 8315.73 32.48 0.00 0.00 15339.88 6297.15 26442.35 00:11:00.267 =================================================================================================================== 00:11:00.267 Total : 8315.73 32.48 0.00 0.00 15339.88 6297.15 26442.35 00:11:00.267 00:11:00.267 Latency(us) 00:11:00.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.267 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:00.267 Nvme1n1 : 1.01 11767.29 45.97 0.00 0.00 10847.07 3647.22 18122.13 00:11:00.267 =================================================================================================================== 00:11:00.267 Total : 11767.29 45.97 0.00 0.00 10847.07 3647.22 18122.13 00:11:00.267 10:59:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1333712 00:11:00.527 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1333714 00:11:00.527 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1333717 00:11:00.527 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.527 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.527 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:00.527 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.527 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:00.527 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:00.527 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:00.527 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:11:00.527 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:00.527 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:11:00.527 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:00.527 10:59:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:00.527 rmmod nvme_tcp 00:11:00.527 rmmod nvme_fabrics 00:11:00.527 rmmod nvme_keyring 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1333459 ']' 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1333459 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1333459 ']' 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1333459 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1333459 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1333459' 00:11:00.787 killing process with pid 1333459 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1333459 00:11:00.787 10:59:20 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1333459 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:00.787 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:00.788 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:00.788 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:00.788 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:00.788 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.788 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.788 10:59:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:03.356 00:11:03.356 real 0m11.081s 00:11:03.356 user 0m19.757s 00:11:03.356 sys 0m5.736s 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:03.356 ************************************ 00:11:03.356 END TEST nvmf_bdev_io_wait 00:11:03.356 ************************************ 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.356 ************************************ 00:11:03.356 START TEST nvmf_queue_depth 00:11:03.356 ************************************ 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:03.356 * Looking for test storage... 
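Note: the teardown traced just above (clear the trap, nvmftestfini, unload the kernel NVMe modules, killprocess of the nvmf_tgt pid) is the sequence every test in this log ends with. A condensed sketch of that pattern, assuming the target pid is known; the helper name is illustrative rather than the exact autotest_common.sh implementation:

stop_nvmf_target() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then          # target still running?
        kill "$pid"
        while kill -0 "$pid" 2>/dev/null; do     # poll until the reactor exits
            sleep 0.5
        done
    fi
    # removing nvme-tcp also drops nvme_fabrics/nvme_keyring, matching the rmmod lines above
    modprobe -v -r nvme-tcp
}

For example, stop_nvmf_target 1333459 would cover the target started at the top of this test.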
00:11:03.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:03.356 10:59:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:03.356 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:03.357 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.357 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.357 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.357 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:03.357 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:03.357 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:11:03.357 10:59:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:11:08.686 10:59:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:08.686 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:08.686 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:08.686 10:59:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:08.686 Found net devices under 0000:86:00.0: cvl_0_0 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:08.686 Found net devices under 0000:86:00.1: cvl_0_1 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:08.686 
10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:08.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:11:08.686 00:11:08.686 --- 10.0.0.2 ping statistics --- 00:11:08.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.686 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:11:08.686 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:08.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:11:08.687 00:11:08.687 --- 10.0.0.1 ping statistics --- 00:11:08.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.687 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1337504 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1337504 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1337504 ']' 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:08.687 10:59:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:08.687 [2024-07-26 10:59:28.012886] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
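Note: the nvmf_tgt instance starting above runs inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init set up a few lines earlier: the target-side port (cvl_0_0, 10.0.0.2) moves into the namespace, the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace, TCP port 4420 is opened in iptables, and both directions are ping-verified. Condensed from the trace above; interface names and addresses are the ones this rig uses:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target-facing port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                 # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1             # target namespace -> initiator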
00:11:08.687 [2024-07-26 10:59:28.012930] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.687 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.687 [2024-07-26 10:59:28.069717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.687 [2024-07-26 10:59:28.148790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.687 [2024-07-26 10:59:28.148824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.687 [2024-07-26 10:59:28.148832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.687 [2024-07-26 10:59:28.148838] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.687 [2024-07-26 10:59:28.148843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.687 [2024-07-26 10:59:28.148859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.628 [2024-07-26 10:59:28.848958] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.628 Malloc0 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.628 [2024-07-26 10:59:28.904623] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1337750 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1337750 /var/tmp/bdevperf.sock 00:11:09.628 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1337750 ']' 00:11:09.629 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:09.629 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:09.629 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:09.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:09.629 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:09.629 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:09.629 10:59:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.629 [2024-07-26 10:59:28.954265] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
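Note: the queue-depth test follows the flow traced above: target-side RPCs create the TCP transport, a 64 MiB / 512 B malloc bdev, subsystem cnode1 with its namespace and listener; bdevperf is then started in wait mode (-z) on its own RPC socket with -q 1024, the NVMe controller is attached over that socket, and bdevperf.py perform_tests (a few lines below) drives the 10-second verify run. A condensed sketch with the paths and ports this run uses; the real script wraps these calls in rpc_cmd/waitforlisten helpers:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
# target side
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: bdevperf idles (-z) until a bdev is attached through its RPC socket
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
sleep 2   # stand-in for the waitforlisten polling the script performs
"$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests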
00:11:09.629 [2024-07-26 10:59:28.954306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337750 ] 00:11:09.629 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.629 [2024-07-26 10:59:29.007909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.629 [2024-07-26 10:59:29.081550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.571 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:10.571 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:10.571 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:10.571 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.572 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:10.572 NVMe0n1 00:11:10.572 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.572 10:59:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:10.572 Running I/O for 10 seconds... 00:11:20.560 00:11:20.560 Latency(us) 00:11:20.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.560 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:20.560 Verification LBA range: start 0x0 length 0x4000 00:11:20.560 NVMe0n1 : 10.05 12054.02 47.09 0.00 0.00 84651.54 9402.99 67473.59 00:11:20.560 =================================================================================================================== 00:11:20.560 Total : 12054.02 47.09 0.00 0.00 84651.54 9402.99 67473.59 00:11:20.560 0 00:11:20.560 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1337750 00:11:20.560 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1337750 ']' 00:11:20.560 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1337750 00:11:20.560 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:20.560 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.560 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1337750 00:11:20.821 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:20.821 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:20.821 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1337750' 00:11:20.821 killing process with pid 1337750 00:11:20.821 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1337750 00:11:20.821 Received shutdown 
signal, test time was about 10.000000 seconds 00:11:20.821 00:11:20.821 Latency(us) 00:11:20.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.821 =================================================================================================================== 00:11:20.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:20.821 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1337750 00:11:20.821 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:20.821 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:20.821 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:20.821 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:20.821 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:20.821 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:20.821 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:20.821 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:20.821 rmmod nvme_tcp 00:11:20.821 rmmod nvme_fabrics 00:11:21.082 rmmod nvme_keyring 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1337504 ']' 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1337504 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1337504 ']' 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1337504 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1337504 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1337504' 00:11:21.082 killing process with pid 1337504 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1337504 00:11:21.082 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1337504 00:11:21.342 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:21.342 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:21.342 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:21.342 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:21.342 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:21.342 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.342 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.342 10:59:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.253 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:23.253 00:11:23.253 real 0m20.245s 00:11:23.253 user 0m24.765s 00:11:23.253 sys 0m5.648s 00:11:23.253 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.253 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:23.253 ************************************ 00:11:23.253 END TEST nvmf_queue_depth 00:11:23.253 ************************************ 00:11:23.253 10:59:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:23.253 10:59:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:23.253 10:59:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.253 10:59:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:23.253 ************************************ 00:11:23.253 START TEST nvmf_target_multipath 00:11:23.253 ************************************ 00:11:23.253 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:23.513 * Looking for test storage... 
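Note: nvmftestfini above finishes by tearing the data path back down: remove_spdk_ns drops the test namespace and the leftover address is flushed from cvl_0_1 before the next test rebuilds everything. A small sketch of that cleanup; the ip netns delete line is an assumption about what _remove_spdk_ns does rather than a quote of it:

ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                              # matches the flush traced above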
00:11:23.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.513 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.513 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:23.513 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.513 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.513 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.513 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.513 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.513 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.513 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:11:23.514 10:59:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:28.904 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.904 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:28.905 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:28.905 Found net devices under 0000:86:00.0: cvl_0_0 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.905 10:59:48 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:28.905 Found net devices under 0000:86:00.1: cvl_0_1 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:28.905 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:29.166 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:11:29.166 00:11:29.166 --- 10.0.0.2 ping statistics --- 00:11:29.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.166 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.403 ms 00:11:29.166 00:11:29.166 --- 10.0.0.1 ping statistics --- 00:11:29.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.166 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:29.166 only one NIC for nvmf test 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.166 rmmod nvme_tcp 00:11:29.166 rmmod nvme_fabrics 00:11:29.166 rmmod nvme_keyring 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.166 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:29.167 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.167 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.167 10:59:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.112 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:31.112 00:11:31.112 real 0m7.803s 
00:11:31.112 user 0m1.539s 00:11:31.113 sys 0m4.248s 00:11:31.113 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.113 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:31.113 ************************************ 00:11:31.113 END TEST nvmf_target_multipath 00:11:31.113 ************************************ 00:11:31.113 10:59:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:31.113 10:59:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:31.113 10:59:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.113 10:59:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:31.113 ************************************ 00:11:31.113 START TEST nvmf_zcopy 00:11:31.113 ************************************ 00:11:31.113 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:31.373 * Looking for test storage... 00:11:31.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.373 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.373 10:59:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.374 10:59:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:11:31.374 10:59:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:11:36.658 10:59:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:36.658 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.658 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:36.659 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:36.659 Found net devices under 0000:86:00.0: cvl_0_0 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:36.659 Found net devices under 0000:86:00.1: cvl_0_1 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.659 10:59:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.659 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:36.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:11:36.920 00:11:36.920 --- 10.0.0.2 ping statistics --- 00:11:36.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.920 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:36.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:11:36.920 00:11:36.920 --- 10.0.0.1 ping statistics --- 00:11:36.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.920 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1346624 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1346624 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1346624 ']' 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.920 10:59:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:36.920 [2024-07-26 10:59:56.391685] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
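Note: the nvmftestinit trace above shows the full network bring-up used by the NVMe/TCP tests. A stand-alone sketch of the same topology, with the interface names (cvl_0_0/cvl_0_1), namespace name and 10.0.0.0/24 addresses taken exactly as they appear in the trace (run as root; assumes both ice ports are otherwise unused):

  # Start from clean addresses on both ports.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # Move one port into a private namespace for the target side;
  # the other port stays in the root namespace as the initiator side.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the default NVMe/TCP port on the initiator-side interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Connectivity check in both directions, as the harness does before continuing.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1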
00:11:36.920 [2024-07-26 10:59:56.391729] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.181 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.181 [2024-07-26 10:59:56.451605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.181 [2024-07-26 10:59:56.523142] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.181 [2024-07-26 10:59:56.523184] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.181 [2024-07-26 10:59:56.523191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.181 [2024-07-26 10:59:56.523198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.181 [2024-07-26 10:59:56.523203] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.181 [2024-07-26 10:59:56.523220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:37.750 [2024-07-26 10:59:57.226421] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:37.750 [2024-07-26 10:59:57.242586] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.750 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:38.011 malloc0 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:38.011 { 00:11:38.011 "params": { 00:11:38.011 "name": "Nvme$subsystem", 00:11:38.011 "trtype": "$TEST_TRANSPORT", 00:11:38.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:38.011 "adrfam": "ipv4", 00:11:38.011 "trsvcid": "$NVMF_PORT", 00:11:38.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:38.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:38.011 "hdgst": ${hdgst:-false}, 00:11:38.011 "ddgst": ${ddgst:-false} 00:11:38.011 }, 00:11:38.011 "method": "bdev_nvme_attach_controller" 00:11:38.011 } 00:11:38.011 EOF 00:11:38.011 )") 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
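Note: the rpc_cmd calls traced above are everything needed to stand up the zero-copy TCP target for this test. A minimal sketch of the equivalent manual sequence, assuming rpc_cmd is a thin wrapper around scripts/rpc.py talking to the nvmf_tgt instance launched earlier over the default /var/tmp/spdk.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # TCP transport with zero-copy enabled; options copied verbatim from zcopy.sh@22.
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
  # Subsystem: allow any host (-a), serial number (-s), at most 10 namespaces (-m).
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # Data and discovery listeners on the target-side address.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Back the subsystem with a 32 MiB / 4096-byte-block malloc bdev as namespace 1.
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1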
00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:38.011 10:59:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:38.011 "params": { 00:11:38.011 "name": "Nvme1", 00:11:38.011 "trtype": "tcp", 00:11:38.011 "traddr": "10.0.0.2", 00:11:38.011 "adrfam": "ipv4", 00:11:38.011 "trsvcid": "4420", 00:11:38.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:38.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:38.011 "hdgst": false, 00:11:38.011 "ddgst": false 00:11:38.011 }, 00:11:38.011 "method": "bdev_nvme_attach_controller" 00:11:38.011 }' 00:11:38.011 [2024-07-26 10:59:57.336585] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:38.011 [2024-07-26 10:59:57.336630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1346675 ] 00:11:38.011 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.011 [2024-07-26 10:59:57.390807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.011 [2024-07-26 10:59:57.464747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.272 Running I/O for 10 seconds... 00:11:48.261 00:11:48.261 Latency(us) 00:11:48.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.261 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:48.261 Verification LBA range: start 0x0 length 0x1000 00:11:48.261 Nvme1n1 : 10.01 7503.59 58.62 0.00 0.00 17017.08 1282.23 49237.48 00:11:48.261 =================================================================================================================== 00:11:48.261 Total : 7503.59 58.62 0.00 0.00 17017.08 1282.23 49237.48 00:11:48.520 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1348607 00:11:48.520 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:48.520 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:48.520 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:48.520 [2024-07-26 11:00:07.890969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.520 [2024-07-26 11:00:07.891003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.520 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:48.520 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:48.520 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:48.520 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:48.520 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:48.520 { 00:11:48.520 "params": { 00:11:48.520 "name": "Nvme$subsystem", 00:11:48.520 "trtype": "$TEST_TRANSPORT", 00:11:48.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.520 "adrfam": "ipv4", 00:11:48.520 "trsvcid": "$NVMF_PORT", 00:11:48.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.520 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:11:48.520 "hdgst": ${hdgst:-false}, 00:11:48.520 "ddgst": ${ddgst:-false} 00:11:48.520 }, 00:11:48.520 "method": "bdev_nvme_attach_controller" 00:11:48.520 } 00:11:48.520 EOF 00:11:48.520 )") 00:11:48.520 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:48.520 [2024-07-26 11:00:07.898957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.520 [2024-07-26 11:00:07.898970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.520 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:11:48.520 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:48.520 11:00:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:48.520 "params": { 00:11:48.520 "name": "Nvme1", 00:11:48.520 "trtype": "tcp", 00:11:48.520 "traddr": "10.0.0.2", 00:11:48.520 "adrfam": "ipv4", 00:11:48.520 "trsvcid": "4420", 00:11:48.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.520 "hdgst": false, 00:11:48.520 "ddgst": false 00:11:48.520 }, 00:11:48.520 "method": "bdev_nvme_attach_controller" 00:11:48.520 }' 00:11:48.520 [2024-07-26 11:00:07.906973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.520 [2024-07-26 11:00:07.906983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.520 [2024-07-26 11:00:07.914994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.520 [2024-07-26 11:00:07.915008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.520 [2024-07-26 11:00:07.923015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.520 [2024-07-26 11:00:07.923024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.520 [2024-07-26 11:00:07.931037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.521 [2024-07-26 11:00:07.931051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.521 [2024-07-26 11:00:07.933960] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:11:48.521 [2024-07-26 11:00:07.934000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1348607 ] 00:11:48.521 [2024-07-26 11:00:07.939062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.521 [2024-07-26 11:00:07.939072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.521 [2024-07-26 11:00:07.947085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.521 [2024-07-26 11:00:07.947094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.521 [2024-07-26 11:00:07.955102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.521 [2024-07-26 11:00:07.955111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.521 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.521 [2024-07-26 11:00:07.963123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.521 [2024-07-26 11:00:07.963132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.521 [2024-07-26 11:00:07.971143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.521 [2024-07-26 11:00:07.971152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.521 [2024-07-26 11:00:07.979164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.521 [2024-07-26 11:00:07.979175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.521 [2024-07-26 11:00:07.987186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.521 [2024-07-26 11:00:07.987196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.521 [2024-07-26 11:00:07.987693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.521 [2024-07-26 11:00:07.995206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.521 [2024-07-26 11:00:07.995217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.521 [2024-07-26 11:00:08.003225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.521 [2024-07-26 11:00:08.003236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.521 [2024-07-26 11:00:08.011246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.521 [2024-07-26 11:00:08.011254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.019269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.019280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.027292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.027306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.035315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 
11:00:08.035328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.043334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.043343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.051356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.051365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.059376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.059386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.065928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.781 [2024-07-26 11:00:08.067409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.067419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.075422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.075433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.083449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.083468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.091465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.091476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.099484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.099495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.107506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.107517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.115526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.115536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.123550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.123561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.131570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.131580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.139590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.139598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.147610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.147619] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.155654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.155674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.163660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.163672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.171681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.171692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.179704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.179718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.187724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.187733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.195743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.195756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.203767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.203777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.781 [2024-07-26 11:00:08.211789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.781 [2024-07-26 11:00:08.211799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.782 [2024-07-26 11:00:08.219813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.782 [2024-07-26 11:00:08.219825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.782 [2024-07-26 11:00:08.227836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.782 [2024-07-26 11:00:08.227850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.782 [2024-07-26 11:00:08.235858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.782 [2024-07-26 11:00:08.235870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.782 [2024-07-26 11:00:08.243878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.782 [2024-07-26 11:00:08.243888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.782 [2024-07-26 11:00:08.251901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.782 [2024-07-26 11:00:08.251911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.782 [2024-07-26 11:00:08.259926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.782 [2024-07-26 11:00:08.259935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.782 [2024-07-26 11:00:08.267948] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.782 [2024-07-26 11:00:08.267958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.782 [2024-07-26 11:00:08.275971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:48.782 [2024-07-26 11:00:08.275982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.283997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.284012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.292015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.292025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.300037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.300052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.308066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.308075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.316089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.316098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.324108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.324119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.332130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.332139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.340160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.340177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.348175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.348189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 Running I/O for 5 seconds... 
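The *ERROR* pairs that start above and continue for the rest of the run, subsystem.c:2058 spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1553 nvmf_rpc_ns_paused: "Unable to add namespace", are emitted every few milliseconds while the 5-second randrw bdevperf job (perfpid=1348607) is running: nvmf_subsystem_add_ns keeps being re-issued for NSID 1 while malloc0 is still attached under that NSID, so each RPC is rejected on the paused-subsystem path. This looks like the test deliberately exercising the namespace-add RPC against a live subsystem while zero-copy I/O is in flight rather than a target failure. One iteration of the pattern, in the same hedged standalone form as the earlier sketch:

  # rejected while NSID 1 is still attached (matches the errors in this log)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # the add would only succeed after the existing namespace is removed first
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1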
00:11:49.042 [2024-07-26 11:00:08.375262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.375283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.388126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.388145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.396743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.396761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.407222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.407251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.415139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.415156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.422375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.422392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.431528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.431546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.442838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.442856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.453361] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.453379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.462818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.462836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.473580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.473598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.484855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.484874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.492456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.492474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.504027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.504050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.514894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 
[2024-07-26 11:00:08.514913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.524308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.524327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.042 [2024-07-26 11:00:08.533714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.042 [2024-07-26 11:00:08.533731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.541765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.541782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.551369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.551391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.559234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.559252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.567414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.567432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.575799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.575816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.585378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.585396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.593178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.593195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.602745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.602763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.611856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.611873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.620754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.620772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.630586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.630606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.640798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.640815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.649635] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.649653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.657456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.657473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.668414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.668432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.675803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.675821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.683467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.683485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.692626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.692643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.700149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.700167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.710958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.710976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.718524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.718542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.728367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.728385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.736477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.736494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.746317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.746335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.755306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.755323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.763948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.763966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.770626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.770643] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.305 [2024-07-26 11:00:08.780859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.305 [2024-07-26 11:00:08.780880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.306 [2024-07-26 11:00:08.789920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.306 [2024-07-26 11:00:08.789937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.802881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.802900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.813069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.813086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.820059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.820076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.829016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.829034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.839066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.839084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.847642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.847659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.856704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.856721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.864660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.864677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.875162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.875179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.883920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.883938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.891210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.891227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.902553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.902572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.910089] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.910106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.920524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.920542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.930269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.930288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.939386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.939403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.946448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.946465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.956610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.956628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.964022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.964039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.972981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.972998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.981798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.981815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.990023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.568 [2024-07-26 11:00:08.990041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.568 [2024-07-26 11:00:08.998886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.569 [2024-07-26 11:00:08.998904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.569 [2024-07-26 11:00:09.006771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.569 [2024-07-26 11:00:09.006789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.569 [2024-07-26 11:00:09.014271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.569 [2024-07-26 11:00:09.014288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.569 [2024-07-26 11:00:09.024833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.569 [2024-07-26 11:00:09.024850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.569 [2024-07-26 11:00:09.034700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.569 [2024-07-26 11:00:09.034717] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.569 [2024-07-26 11:00:09.041520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.569 [2024-07-26 11:00:09.041537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.569 [2024-07-26 11:00:09.053544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.569 [2024-07-26 11:00:09.053561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.569 [2024-07-26 11:00:09.064047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.569 [2024-07-26 11:00:09.064066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.828 [2024-07-26 11:00:09.071889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.828 [2024-07-26 11:00:09.071908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.828 [2024-07-26 11:00:09.082230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.828 [2024-07-26 11:00:09.082247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.828 [2024-07-26 11:00:09.089984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.828 [2024-07-26 11:00:09.090001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.828 [2024-07-26 11:00:09.100230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.828 [2024-07-26 11:00:09.100248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.109223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.109241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.118260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.118278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.126372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.126390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.134172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.134188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.144295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.144313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.151592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.151610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.161408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.161425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.169059] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.169076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.178966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.178983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.186245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.186262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.196826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.196843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.205946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.205963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.214833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.214850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.222771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.222791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.231233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.231251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.240572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.240589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.247956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.247972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.257056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.257073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.264821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.264839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.272329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.272350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.281687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.281704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.290185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.290202] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.299020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.299038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.307340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.307358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:49.829 [2024-07-26 11:00:09.317141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:49.829 [2024-07-26 11:00:09.317159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.326341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.326359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.334970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.334987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.343536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.343554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.350703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.350720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.362423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.362440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.373557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.373575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.382443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.382461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.389983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.390004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.401205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.401222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.409166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.409183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.418696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.418713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.426689] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.426707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.434811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.434829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.442912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.442930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.454783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.454801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.463168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.463186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.471884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.471902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.484489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.484508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.492352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.492370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.505525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.505543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.512806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.512823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.522647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.522665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.532075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.532093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.542102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.542120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.549982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.550000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.560869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.560887] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.567934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.567954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.089 [2024-07-26 11:00:09.579968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.089 [2024-07-26 11:00:09.579986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.350 [2024-07-26 11:00:09.592349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.350 [2024-07-26 11:00:09.592367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.350 [2024-07-26 11:00:09.604908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.350 [2024-07-26 11:00:09.604926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.350 [2024-07-26 11:00:09.614692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.350 [2024-07-26 11:00:09.614710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.350 [2024-07-26 11:00:09.622900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.350 [2024-07-26 11:00:09.622918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.350 [2024-07-26 11:00:09.631582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.350 [2024-07-26 11:00:09.631600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.350 [2024-07-26 11:00:09.641025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.350 [2024-07-26 11:00:09.641050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.350 [2024-07-26 11:00:09.650239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.350 [2024-07-26 11:00:09.650257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.350 [2024-07-26 11:00:09.657977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.350 [2024-07-26 11:00:09.657995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.350 [2024-07-26 11:00:09.667905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.350 [2024-07-26 11:00:09.667924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.350 [2024-07-26 11:00:09.677185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.350 [2024-07-26 11:00:09.677203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.350 [2024-07-26 11:00:09.684641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.350 [2024-07-26 11:00:09.684659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.350 [2024-07-26 11:00:09.693977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:50.350 [2024-07-26 11:00:09.693996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:50.350 [2024-07-26 11:00:09.701291] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:50.350 [2024-07-26 11:00:09.701310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:50.350 [2024-07-26 11:00:09.709295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:50.350 [2024-07-26 11:00:09.709312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of messages, subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace, repeats continuously from [2024-07-26 11:00:09.720781] through [2024-07-26 11:00:12.608615] (console time 00:11:50.350 through 00:11:53.216) ...]
00:11:53.216 [2024-07-26 11:00:12.617541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:53.216 [2024-07-26 11:00:12.617559]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.216 [2024-07-26 11:00:12.626408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.216 [2024-07-26 11:00:12.626426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.216 [2024-07-26 11:00:12.634948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.216 [2024-07-26 11:00:12.634965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.216 [2024-07-26 11:00:12.641757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.216 [2024-07-26 11:00:12.641779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.216 [2024-07-26 11:00:12.652406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.216 [2024-07-26 11:00:12.652426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.216 [2024-07-26 11:00:12.660833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.216 [2024-07-26 11:00:12.660851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.216 [2024-07-26 11:00:12.669969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.216 [2024-07-26 11:00:12.669988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.216 [2024-07-26 11:00:12.679139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.216 [2024-07-26 11:00:12.679157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.216 [2024-07-26 11:00:12.687768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.216 [2024-07-26 11:00:12.687786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.216 [2024-07-26 11:00:12.696446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.216 [2024-07-26 11:00:12.696464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.216 [2024-07-26 11:00:12.704909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.216 [2024-07-26 11:00:12.704927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.476 [2024-07-26 11:00:12.713595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.476 [2024-07-26 11:00:12.713613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.476 [2024-07-26 11:00:12.722710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.476 [2024-07-26 11:00:12.722728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.476 [2024-07-26 11:00:12.730030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.476 [2024-07-26 11:00:12.730055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.476 [2024-07-26 11:00:12.739981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.476 [2024-07-26 11:00:12.739999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.476 [2024-07-26 11:00:12.747876] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.476 [2024-07-26 11:00:12.747893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.476 [2024-07-26 11:00:12.757214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.476 [2024-07-26 11:00:12.757233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.476 [2024-07-26 11:00:12.766014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.476 [2024-07-26 11:00:12.766032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.476 [2024-07-26 11:00:12.772850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.476 [2024-07-26 11:00:12.772867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.783233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.783251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.792318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.792335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.802257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.802274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.811576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.811597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.820238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.820255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.829092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.829109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.836719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.836736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.845550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.845568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.854238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.854255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.862300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.862318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.870582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.870598] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.879549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.879566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.887210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.887227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.896927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.896945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.905660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.905678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.915359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.915376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.925317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.925336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.933601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.933618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.941309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.941326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.951534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.951551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.959519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.959536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.477 [2024-07-26 11:00:12.968332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.477 [2024-07-26 11:00:12.968349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.737 [2024-07-26 11:00:12.977139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.737 [2024-07-26 11:00:12.977158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.737 [2024-07-26 11:00:12.985816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:12.985833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:12.994623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:12.994640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.004125] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.004143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.013006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.013024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.021937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.021955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.029247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.029263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.040912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.040930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.051934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.051952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.059332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.059349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.072271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.072290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.080640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.080658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.088672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.088690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.097120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.097137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.106572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.106589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.114707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.114724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.124832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.124849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.132447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.132463] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.141243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.141260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.149618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.149635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.158101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.158118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.165902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.165919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.174599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.174615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.183235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.183252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.193974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.193990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.201170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.201187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.211963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.211979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.223242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.223259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.738 [2024-07-26 11:00:13.232055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.738 [2024-07-26 11:00:13.232073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.241890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.241908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.251494] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.251510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.260826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.260843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.269550] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.269567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.277614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.277631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.286372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.286389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.295885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.295903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.304041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.304062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.314127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.314145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.322787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.322804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.330449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.330466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.338059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.338075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.348247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.348264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.356866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.356884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.363855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.363872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 00:11:53.999 Latency(us) 00:11:53.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.999 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:53.999 Nvme1n1 : 5.00 14830.47 115.86 0.00 0.00 8623.01 2008.82 60635.05 00:11:53.999 =================================================================================================================== 00:11:53.999 Total : 14830.47 115.86 0.00 0.00 8623.01 2008.82 60635.05 00:11:53.999 [2024-07-26 11:00:13.371299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.371312] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.379320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.379333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.387339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.387348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.395370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.395387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.403386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.403398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.411403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.411415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.419427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.419437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.427448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.427458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.435468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.435479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.443491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.443508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.451511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.451521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.459534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.459544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.467554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.467565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.475574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.475583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.483597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.483605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:53.999 [2024-07-26 11:00:13.491621] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:53.999 [2024-07-26 11:00:13.491632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.259 [2024-07-26 11:00:13.499644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.259 [2024-07-26 11:00:13.499656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.259 [2024-07-26 11:00:13.507665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.259 [2024-07-26 11:00:13.507677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.259 [2024-07-26 11:00:13.515683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.259 [2024-07-26 11:00:13.515691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.259 [2024-07-26 11:00:13.523704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.259 [2024-07-26 11:00:13.523713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.259 [2024-07-26 11:00:13.531727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.259 [2024-07-26 11:00:13.531737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.259 [2024-07-26 11:00:13.539748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.259 [2024-07-26 11:00:13.539757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.259 [2024-07-26 11:00:13.547769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:54.259 [2024-07-26 11:00:13.547778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1348607) - No such process 00:11:54.259 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1348607 00:11:54.259 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.259 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.259 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:54.259 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.259 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:54.259 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.259 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:54.259 delay0 00:11:54.259 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.259 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:54.259 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.259 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:54.259 11:00:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.259 11:00:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:54.259 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.259 [2024-07-26 11:00:13.675747] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:00.846 Initializing NVMe Controllers 00:12:00.846 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:00.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:00.846 Initialization complete. Launching workers. 00:12:00.846 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 85 00:12:00.846 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 362, failed to submit 43 00:12:00.846 success 167, unsuccess 195, failed 0 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:00.846 rmmod nvme_tcp 00:12:00.846 rmmod nvme_fabrics 00:12:00.846 rmmod nvme_keyring 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1346624 ']' 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1346624 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1346624 ']' 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1346624 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1346624 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1346624' 00:12:00.846 
killing process with pid 1346624 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1346624 00:12:00.846 11:00:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1346624 00:12:00.846 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:00.846 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:00.846 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:00.846 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:00.846 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:00.846 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.846 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.846 11:00:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.801 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:02.801 00:12:02.801 real 0m31.582s 00:12:02.801 user 0m42.717s 00:12:02.801 sys 0m10.653s 00:12:02.801 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.801 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:02.801 ************************************ 00:12:02.801 END TEST nvmf_zcopy 00:12:02.801 ************************************ 00:12:02.801 11:00:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:02.801 11:00:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:02.801 11:00:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.801 11:00:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:02.801 ************************************ 00:12:02.801 START TEST nvmf_nmic 00:12:02.801 ************************************ 00:12:02.801 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:03.061 * Looking for test storage... 
00:12:03.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.061 11:00:22 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:12:03.061 11:00:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:08.339 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:08.339 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.339 11:00:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:08.339 Found net devices under 0000:86:00.0: cvl_0_0 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:08.339 Found net devices under 0000:86:00.1: cvl_0_1 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:08.339 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:08.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:12:08.600 00:12:08.600 --- 10.0.0.2 ping statistics --- 00:12:08.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.600 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:08.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:12:08.600 00:12:08.600 --- 10.0.0.1 ping statistics --- 00:12:08.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.600 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1354544 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1354544 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1354544 ']' 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:08.600 11:00:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:08.600 [2024-07-26 11:00:28.026376] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:08.600 [2024-07-26 11:00:28.026418] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.600 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.600 [2024-07-26 11:00:28.083187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.860 [2024-07-26 11:00:28.157950] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.861 [2024-07-26 11:00:28.157993] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.861 [2024-07-26 11:00:28.157999] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.861 [2024-07-26 11:00:28.158005] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.861 [2024-07-26 11:00:28.158010] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
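For reference, the nvmf_tcp_init plumbing traced above reduces to the following sequence (a condensed sketch of the commands appearing in the trace; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this rig, and the nvmf_tgt path is shortened):

ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator NIC stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator-side interface
ping -c 1 10.0.0.2                                             # sanity-check the path in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target runs inside the namespace on cores 0-3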
00:12:08.861 [2024-07-26 11:00:28.158078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.861 [2024-07-26 11:00:28.158173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.861 [2024-07-26 11:00:28.158261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.861 [2024-07-26 11:00:28.158262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.430 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:09.430 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:12:09.430 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:09.430 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.431 [2024-07-26 11:00:28.883319] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.431 Malloc0 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.431 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.690 [2024-07-26 11:00:28.935255] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:09.690 test case1: single bdev can't be used in multiple subsystems 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.690 [2024-07-26 11:00:28.959164] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:09.690 [2024-07-26 11:00:28.959182] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:09.690 [2024-07-26 11:00:28.959189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.690 request: 00:12:09.690 { 00:12:09.690 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:09.690 "namespace": { 00:12:09.690 "bdev_name": "Malloc0", 00:12:09.690 "no_auto_visible": false 00:12:09.690 }, 00:12:09.690 "method": "nvmf_subsystem_add_ns", 00:12:09.690 "req_id": 1 00:12:09.690 } 00:12:09.690 Got JSON-RPC error response 00:12:09.690 response: 00:12:09.690 { 00:12:09.690 "code": -32602, 00:12:09.690 "message": "Invalid parameters" 00:12:09.690 } 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:09.690 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:09.690 Adding namespace failed - expected result. 
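Spelled out with scripts/rpc.py, the calls behind test case1 above are the following (same arguments as the rpc_cmd lines in the trace; rpc_cmd is the test harness wrapper around these RPCs against /var/tmp/spdk.sock):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # cnode1 claims Malloc0 (exclusive_write)
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0          # rejected with -32602: bdev already claimed by cnode1

The non-zero status from the last call is what the test expects, hence "Adding namespace failed - expected result." above.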
00:12:09.691 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:09.691 test case2: host connect to nvmf target in multiple paths 00:12:09.691 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:09.691 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.691 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:09.691 [2024-07-26 11:00:28.971295] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:09.691 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.691 11:00:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:11.071 11:00:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:12.010 11:00:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.010 11:00:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:12.010 11:00:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.010 11:00:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:12.010 11:00:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:13.920 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:13.920 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:13.920 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:13.920 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:13.920 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.920 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:13.920 11:00:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:13.920 [global] 00:12:13.920 thread=1 00:12:13.920 invalidate=1 00:12:13.920 rw=write 00:12:13.920 time_based=1 00:12:13.920 runtime=1 00:12:13.920 ioengine=libaio 00:12:13.920 direct=1 00:12:13.920 bs=4096 00:12:13.920 iodepth=1 00:12:13.920 norandommap=0 00:12:13.920 numjobs=1 00:12:13.920 00:12:13.920 verify_dump=1 00:12:13.920 verify_backlog=512 00:12:13.920 verify_state_save=0 00:12:13.920 do_verify=1 00:12:13.920 verify=crc32c-intel 00:12:14.177 [job0] 00:12:14.177 filename=/dev/nvme0n1 00:12:14.177 Could not set queue depth (nvme0n1) 00:12:14.435 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:12:14.435 fio-3.35 00:12:14.435 Starting 1 thread 00:12:15.373 00:12:15.373 job0: (groupid=0, jobs=1): err= 0: pid=1355671: Fri Jul 26 11:00:34 2024 00:12:15.373 read: IOPS=19, BW=76.9KiB/s (78.8kB/s)(80.0KiB/1040msec) 00:12:15.373 slat (nsec): min=9842, max=24293, avg=21395.45, stdev=2857.78 00:12:15.373 clat (usec): min=41775, max=42158, avg=41953.80, stdev=90.89 00:12:15.373 lat (usec): min=41798, max=42180, avg=41975.19, stdev=90.99 00:12:15.373 clat percentiles (usec): 00:12:15.373 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:12:15.373 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:15.373 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:15.373 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:15.373 | 99.99th=[42206] 00:12:15.373 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:12:15.373 slat (nsec): min=10581, max=45978, avg=14066.76, stdev=5941.17 00:12:15.373 clat (usec): min=223, max=922, avg=374.02, stdev=203.83 00:12:15.373 lat (usec): min=258, max=957, avg=388.09, stdev=208.31 00:12:15.373 clat percentiles (usec): 00:12:15.373 | 1.00th=[ 249], 5.00th=[ 249], 10.00th=[ 251], 20.00th=[ 253], 00:12:15.373 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 258], 60.00th=[ 269], 00:12:15.373 | 70.00th=[ 338], 80.00th=[ 486], 90.00th=[ 783], 95.00th=[ 848], 00:12:15.373 | 99.00th=[ 906], 99.50th=[ 914], 99.90th=[ 922], 99.95th=[ 922], 00:12:15.373 | 99.99th=[ 922] 00:12:15.373 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:12:15.373 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:15.373 lat (usec) : 250=8.46%, 500=68.98%, 750=7.52%, 1000=11.28% 00:12:15.373 lat (msec) : 50=3.76% 00:12:15.373 cpu : usr=0.10%, sys=1.35%, ctx=532, majf=0, minf=2 00:12:15.373 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.373 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.373 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.373 00:12:15.373 Run status group 0 (all jobs): 00:12:15.373 READ: bw=76.9KiB/s (78.8kB/s), 76.9KiB/s-76.9KiB/s (78.8kB/s-78.8kB/s), io=80.0KiB (81.9kB), run=1040-1040msec 00:12:15.373 WRITE: bw=1969KiB/s (2016kB/s), 1969KiB/s-1969KiB/s (2016kB/s-2016kB/s), io=2048KiB (2097kB), run=1040-1040msec 00:12:15.373 00:12:15.373 Disk stats (read/write): 00:12:15.373 nvme0n1: ios=66/512, merge=0/0, ticks=750/190, in_queue=940, util=93.39% 00:12:15.633 11:00:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 
00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:15.633 rmmod nvme_tcp 00:12:15.633 rmmod nvme_fabrics 00:12:15.633 rmmod nvme_keyring 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1354544 ']' 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1354544 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1354544 ']' 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1354544 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:15.633 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1354544 00:12:15.893 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:15.893 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:15.893 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1354544' 00:12:15.893 killing process with pid 1354544 00:12:15.893 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1354544 00:12:15.893 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1354544 00:12:15.893 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:15.893 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:15.893 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:15.893 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:15.893 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:15.893 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.893 11:00:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.893 11:00:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:18.435 00:12:18.435 real 0m15.180s 00:12:18.435 user 0m35.726s 00:12:18.435 sys 0m4.840s 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:18.435 ************************************ 00:12:18.435 END TEST nvmf_nmic 00:12:18.435 ************************************ 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:18.435 ************************************ 00:12:18.435 START TEST nvmf_fio_target 00:12:18.435 ************************************ 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:18.435 * Looking for test storage... 00:12:18.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.435 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:12:18.436 11:00:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:12:23.722 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:23.723 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:23.723 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:23.723 Found net devices under 0000:86:00.0: cvl_0_0 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:23.723 Found net devices under 0000:86:00.1: cvl_0_1 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:23.723 11:00:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.723 11:00:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:23.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:12:23.723 00:12:23.723 --- 10.0.0.2 ping statistics --- 00:12:23.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.723 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:12:23.723 00:12:23.723 --- 10.0.0.1 ping statistics --- 00:12:23.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.723 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1359213 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1359213 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1359213 ']' 00:12:23.723 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.724 11:00:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:23.724 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.724 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:23.724 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.724 [2024-07-26 11:00:43.102187] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:23.724 [2024-07-26 11:00:43.102232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.724 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.724 [2024-07-26 11:00:43.162813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.985 [2024-07-26 11:00:43.243894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.985 [2024-07-26 11:00:43.243934] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.985 [2024-07-26 11:00:43.243941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.985 [2024-07-26 11:00:43.243947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.985 [2024-07-26 11:00:43.243955] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
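The long preamble above is the generic bootstrap every target-side test repeats; in target/fio.sh it amounts to just these opening lines (a sketch based on the fio.sh@9-17 trace entries, with $rootdir standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout):

source "$rootdir/test/nvmf/common.sh"    # provides nvmftestinit, nvmfappstart, rpc_cmd, nvmftestfini
MALLOC_BDEV_SIZE=64
MALLOC_BLOCK_SIZE=512
rpc_py="$rootdir/scripts/rpc.py"
nvmftestinit             # detect the e810 ports, rebuild the cvl_0_0_ns_spdk namespace, assign 10.0.0.1/10.0.0.2, ping both ways
nvmfappstart -m 0xF      # launch nvmf_tgt inside the namespace and wait for /var/tmp/spdk.sock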
00:12:23.985 [2024-07-26 11:00:43.243999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.985 [2024-07-26 11:00:43.244019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.985 [2024-07-26 11:00:43.244050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.985 [2024-07-26 11:00:43.244055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.555 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:24.555 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:12:24.555 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:24.555 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:24.555 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.555 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.555 11:00:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:24.815 [2024-07-26 11:00:44.093994] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.815 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.076 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:25.076 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.076 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:25.076 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.336 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:25.337 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.675 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:25.675 11:00:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:25.675 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:25.936 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:25.936 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:26.196 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:26.196 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:26.456 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:26.456 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:26.456 11:00:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:26.715 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:26.715 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:26.975 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:26.975 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.975 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.235 [2024-07-26 11:00:46.596291] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.235 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:27.495 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:27.495 11:00:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.875 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:28.875 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:28.875 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.875 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:28.875 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:28.875 11:00:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:30.784 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:30.784 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:30.784 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.784 11:00:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:30.784 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.784 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:30.784 11:00:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:30.784 [global] 00:12:30.784 thread=1 00:12:30.784 invalidate=1 00:12:30.784 rw=write 00:12:30.784 time_based=1 00:12:30.784 runtime=1 00:12:30.784 ioengine=libaio 00:12:30.784 direct=1 00:12:30.784 bs=4096 00:12:30.784 iodepth=1 00:12:30.784 norandommap=0 00:12:30.784 numjobs=1 00:12:30.784 00:12:30.784 verify_dump=1 00:12:30.784 verify_backlog=512 00:12:30.784 verify_state_save=0 00:12:30.784 do_verify=1 00:12:30.784 verify=crc32c-intel 00:12:30.784 [job0] 00:12:30.784 filename=/dev/nvme0n1 00:12:30.784 [job1] 00:12:30.784 filename=/dev/nvme0n2 00:12:30.784 [job2] 00:12:30.784 filename=/dev/nvme0n3 00:12:30.784 [job3] 00:12:30.784 filename=/dev/nvme0n4 00:12:30.784 Could not set queue depth (nvme0n1) 00:12:30.784 Could not set queue depth (nvme0n2) 00:12:30.784 Could not set queue depth (nvme0n3) 00:12:30.784 Could not set queue depth (nvme0n4) 00:12:31.044 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:31.044 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:31.044 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:31.044 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:31.044 fio-3.35 00:12:31.044 Starting 4 threads 00:12:32.426 00:12:32.426 job0: (groupid=0, jobs=1): err= 0: pid=1360780: Fri Jul 26 11:00:51 2024 00:12:32.426 read: IOPS=421, BW=1686KiB/s (1727kB/s)(1688KiB/1001msec) 00:12:32.426 slat (nsec): min=6698, max=25003, avg=9247.17, stdev=3089.01 00:12:32.426 clat (usec): min=360, max=42963, avg=1874.27, stdev=7188.00 00:12:32.426 lat (usec): min=368, max=42987, avg=1883.52, stdev=7190.08 00:12:32.426 clat percentiles (usec): 00:12:32.426 | 1.00th=[ 371], 5.00th=[ 392], 10.00th=[ 424], 20.00th=[ 482], 00:12:32.426 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 578], 60.00th=[ 594], 00:12:32.426 | 70.00th=[ 619], 80.00th=[ 644], 90.00th=[ 758], 95.00th=[ 1205], 00:12:32.426 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:12:32.426 | 99.99th=[42730] 00:12:32.426 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:32.426 slat (usec): min=6, max=3245, avg=22.86, stdev=148.86 00:12:32.426 clat (usec): min=249, max=3002, avg=371.96, stdev=228.59 00:12:32.426 lat (usec): min=261, max=5232, avg=394.82, stdev=315.91 00:12:32.426 clat percentiles (usec): 00:12:32.426 | 1.00th=[ 253], 5.00th=[ 260], 10.00th=[ 262], 20.00th=[ 269], 00:12:32.426 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 302], 60.00th=[ 322], 00:12:32.426 | 70.00th=[ 338], 80.00th=[ 392], 90.00th=[ 537], 95.00th=[ 750], 00:12:32.426 | 99.00th=[ 1205], 99.50th=[ 1532], 99.90th=[ 2999], 99.95th=[ 2999], 00:12:32.426 | 99.99th=[ 2999] 00:12:32.426 bw ( KiB/s): min= 4096, max= 4096, per=23.43%, avg=4096.00, stdev= 0.00, samples=1 00:12:32.426 iops : min= 1024, max= 1024, avg=1024.00, stdev= 
0.00, samples=1 00:12:32.426 lat (usec) : 250=0.11%, 500=58.78%, 750=33.62%, 1000=3.32% 00:12:32.426 lat (msec) : 2=2.46%, 4=0.32%, 50=1.39% 00:12:32.426 cpu : usr=0.60%, sys=1.00%, ctx=936, majf=0, minf=1 00:12:32.426 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.426 issued rwts: total=422,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.426 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.426 job1: (groupid=0, jobs=1): err= 0: pid=1360781: Fri Jul 26 11:00:51 2024 00:12:32.426 read: IOPS=1003, BW=4016KiB/s (4112kB/s)(4100KiB/1021msec) 00:12:32.426 slat (nsec): min=4506, max=25887, avg=7610.64, stdev=1197.07 00:12:32.426 clat (usec): min=351, max=41158, avg=520.47, stdev=1271.81 00:12:32.426 lat (usec): min=358, max=41168, avg=528.08, stdev=1271.87 00:12:32.426 clat percentiles (usec): 00:12:32.426 | 1.00th=[ 363], 5.00th=[ 383], 10.00th=[ 416], 20.00th=[ 461], 00:12:32.427 | 30.00th=[ 469], 40.00th=[ 478], 50.00th=[ 482], 60.00th=[ 490], 00:12:32.427 | 70.00th=[ 498], 80.00th=[ 506], 90.00th=[ 519], 95.00th=[ 529], 00:12:32.427 | 99.00th=[ 791], 99.50th=[ 799], 99.90th=[ 1139], 99.95th=[41157], 00:12:32.427 | 99.99th=[41157] 00:12:32.427 write: IOPS=1504, BW=6018KiB/s (6162kB/s)(6144KiB/1021msec); 0 zone resets 00:12:32.427 slat (usec): min=7, max=2951, avg=13.46, stdev=77.63 00:12:32.427 clat (usec): min=241, max=1743, avg=294.26, stdev=96.92 00:12:32.427 lat (usec): min=249, max=3774, avg=307.72, stdev=133.97 00:12:32.427 clat percentiles (usec): 00:12:32.427 | 1.00th=[ 245], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 251], 00:12:32.427 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:12:32.427 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 343], 95.00th=[ 396], 00:12:32.427 | 99.00th=[ 701], 99.50th=[ 996], 99.90th=[ 1549], 99.95th=[ 1745], 00:12:32.427 | 99.99th=[ 1745] 00:12:32.427 bw ( KiB/s): min= 5704, max= 6584, per=35.15%, avg=6144.00, stdev=622.25, samples=2 00:12:32.427 iops : min= 1426, max= 1646, avg=1536.00, stdev=155.56, samples=2 00:12:32.427 lat (usec) : 250=8.12%, 500=80.20%, 750=10.66%, 1000=0.66% 00:12:32.427 lat (msec) : 2=0.31%, 50=0.04% 00:12:32.427 cpu : usr=1.08%, sys=2.75%, ctx=2564, majf=0, minf=1 00:12:32.427 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.427 issued rwts: total=1025,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.427 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.427 job2: (groupid=0, jobs=1): err= 0: pid=1360782: Fri Jul 26 11:00:51 2024 00:12:32.427 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:32.427 slat (nsec): min=6505, max=24482, avg=7722.19, stdev=1176.39 00:12:32.427 clat (usec): min=383, max=1450, avg=529.91, stdev=51.65 00:12:32.427 lat (usec): min=392, max=1458, avg=537.63, stdev=51.58 00:12:32.427 clat percentiles (usec): 00:12:32.427 | 1.00th=[ 416], 5.00th=[ 490], 10.00th=[ 498], 20.00th=[ 506], 00:12:32.427 | 30.00th=[ 515], 40.00th=[ 519], 50.00th=[ 529], 60.00th=[ 529], 00:12:32.427 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 562], 95.00th=[ 578], 00:12:32.427 | 99.00th=[ 635], 99.50th=[ 676], 99.90th=[ 1336], 99.95th=[ 1450], 00:12:32.427 | 
99.99th=[ 1450] 00:12:32.427 write: IOPS=1276, BW=5107KiB/s (5229kB/s)(5112KiB/1001msec); 0 zone resets 00:12:32.427 slat (nsec): min=4606, max=34360, avg=11704.66, stdev=2445.82 00:12:32.427 clat (usec): min=243, max=1726, avg=335.35, stdev=141.54 00:12:32.427 lat (usec): min=256, max=1761, avg=347.06, stdev=141.92 00:12:32.427 clat percentiles (usec): 00:12:32.427 | 1.00th=[ 247], 5.00th=[ 251], 10.00th=[ 253], 20.00th=[ 260], 00:12:32.427 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 293], 60.00th=[ 310], 00:12:32.427 | 70.00th=[ 322], 80.00th=[ 363], 90.00th=[ 465], 95.00th=[ 586], 00:12:32.427 | 99.00th=[ 971], 99.50th=[ 1020], 99.90th=[ 1647], 99.95th=[ 1729], 00:12:32.427 | 99.99th=[ 1729] 00:12:32.427 bw ( KiB/s): min= 4976, max= 4976, per=28.47%, avg=4976.00, stdev= 0.00, samples=1 00:12:32.427 iops : min= 1244, max= 1244, avg=1244.00, stdev= 0.00, samples=1 00:12:32.427 lat (usec) : 250=2.52%, 500=53.69%, 750=42.40%, 1000=0.96% 00:12:32.427 lat (msec) : 2=0.43% 00:12:32.427 cpu : usr=1.10%, sys=2.40%, ctx=2304, majf=0, minf=2 00:12:32.427 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.427 issued rwts: total=1024,1278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:32.427 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.427 job3: (groupid=0, jobs=1): err= 0: pid=1360783: Fri Jul 26 11:00:51 2024 00:12:32.427 read: IOPS=1024, BW=4096KiB/s (4194kB/s)(4096KiB/1000msec) 00:12:32.427 slat (nsec): min=7007, max=30681, avg=8044.40, stdev=1475.07 00:12:32.427 clat (usec): min=473, max=19585, avg=595.58, stdev=595.84 00:12:32.427 lat (usec): min=481, max=19593, avg=603.63, stdev=595.87 00:12:32.427 clat percentiles (usec): 00:12:32.427 | 1.00th=[ 498], 5.00th=[ 523], 10.00th=[ 537], 20.00th=[ 545], 00:12:32.427 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 578], 00:12:32.427 | 70.00th=[ 586], 80.00th=[ 594], 90.00th=[ 635], 95.00th=[ 676], 00:12:32.427 | 99.00th=[ 742], 99.50th=[ 791], 99.90th=[ 865], 99.95th=[19530], 00:12:32.427 | 99.99th=[19530] 00:12:32.427 write: IOPS=1136, BW=4544KiB/s (4653kB/s)(4544KiB/1000msec); 0 zone resets 00:12:32.427 slat (usec): min=9, max=3299, avg=14.30, stdev=97.58 00:12:32.427 clat (usec): min=240, max=1762, avg=317.04, stdev=127.64 00:12:32.427 lat (usec): min=256, max=3923, avg=331.34, stdev=166.44 00:12:32.427 clat percentiles (usec): 00:12:32.427 | 1.00th=[ 249], 5.00th=[ 251], 10.00th=[ 253], 20.00th=[ 258], 00:12:32.427 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:12:32.427 | 70.00th=[ 302], 80.00th=[ 326], 90.00th=[ 453], 95.00th=[ 570], 00:12:32.427 | 99.00th=[ 734], 99.50th=[ 979], 99.90th=[ 1549], 99.95th=[ 1762], 00:12:32.427 | 99.99th=[ 1762] 00:12:32.427 bw ( KiB/s): min= 4440, max= 4440, per=25.40%, avg=4440.00, stdev= 0.00, samples=1 00:12:32.427 iops : min= 1110, max= 1110, avg=1110.00, stdev= 0.00, samples=1 00:12:32.427 lat (usec) : 250=1.34%, 500=48.33%, 750=49.44%, 1000=0.60% 00:12:32.427 lat (msec) : 2=0.23%, 20=0.05% 00:12:32.427 cpu : usr=1.10%, sys=2.20%, ctx=2162, majf=0, minf=1 00:12:32.427 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:32.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:32.427 issued rwts: total=1024,1136,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:12:32.427 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:32.427 00:12:32.427 Run status group 0 (all jobs): 00:12:32.427 READ: bw=13.4MiB/s (14.0MB/s), 1686KiB/s-4096KiB/s (1727kB/s-4194kB/s), io=13.7MiB (14.3MB), run=1000-1021msec 00:12:32.427 WRITE: bw=17.1MiB/s (17.9MB/s), 2046KiB/s-6018KiB/s (2095kB/s-6162kB/s), io=17.4MiB (18.3MB), run=1000-1021msec 00:12:32.427 00:12:32.427 Disk stats (read/write): 00:12:32.427 nvme0n1: ios=390/512, merge=0/0, ticks=731/190, in_queue=921, util=87.07% 00:12:32.427 nvme0n2: ios=1071/1143, merge=0/0, ticks=597/325, in_queue=922, util=91.16% 00:12:32.427 nvme0n3: ios=961/1024, merge=0/0, ticks=1382/322, in_queue=1704, util=93.64% 00:12:32.427 nvme0n4: ios=975/1024, merge=0/0, ticks=915/305, in_queue=1220, util=94.64% 00:12:32.427 11:00:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:32.427 [global] 00:12:32.427 thread=1 00:12:32.427 invalidate=1 00:12:32.427 rw=randwrite 00:12:32.427 time_based=1 00:12:32.427 runtime=1 00:12:32.427 ioengine=libaio 00:12:32.427 direct=1 00:12:32.427 bs=4096 00:12:32.427 iodepth=1 00:12:32.427 norandommap=0 00:12:32.427 numjobs=1 00:12:32.427 00:12:32.427 verify_dump=1 00:12:32.427 verify_backlog=512 00:12:32.427 verify_state_save=0 00:12:32.427 do_verify=1 00:12:32.427 verify=crc32c-intel 00:12:32.427 [job0] 00:12:32.427 filename=/dev/nvme0n1 00:12:32.427 [job1] 00:12:32.427 filename=/dev/nvme0n2 00:12:32.427 [job2] 00:12:32.427 filename=/dev/nvme0n3 00:12:32.427 [job3] 00:12:32.427 filename=/dev/nvme0n4 00:12:32.427 Could not set queue depth (nvme0n1) 00:12:32.427 Could not set queue depth (nvme0n2) 00:12:32.427 Could not set queue depth (nvme0n3) 00:12:32.427 Could not set queue depth (nvme0n4) 00:12:32.687 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.687 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.687 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.687 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:32.687 fio-3.35 00:12:32.687 Starting 4 threads 00:12:34.098 00:12:34.098 job0: (groupid=0, jobs=1): err= 0: pid=1361155: Fri Jul 26 11:00:53 2024 00:12:34.098 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:34.098 slat (nsec): min=7183, max=24162, avg=8207.19, stdev=1106.93 00:12:34.098 clat (usec): min=364, max=862, avg=520.40, stdev=100.90 00:12:34.098 lat (usec): min=372, max=871, avg=528.61, stdev=100.94 00:12:34.098 clat percentiles (usec): 00:12:34.098 | 1.00th=[ 379], 5.00th=[ 408], 10.00th=[ 445], 20.00th=[ 465], 00:12:34.098 | 30.00th=[ 478], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 494], 00:12:34.098 | 70.00th=[ 510], 80.00th=[ 553], 90.00th=[ 701], 95.00th=[ 791], 00:12:34.098 | 99.00th=[ 816], 99.50th=[ 816], 99.90th=[ 832], 99.95th=[ 865], 00:12:34.098 | 99.99th=[ 865] 00:12:34.098 write: IOPS=1444, BW=5778KiB/s (5917kB/s)(5784KiB/1001msec); 0 zone resets 00:12:34.098 slat (nsec): min=10594, max=51769, avg=12274.33, stdev=2057.74 00:12:34.098 clat (usec): min=249, max=861, avg=299.58, stdev=88.84 00:12:34.098 lat (usec): min=260, max=912, avg=311.86, stdev=89.24 00:12:34.098 clat percentiles (usec): 00:12:34.098 | 1.00th=[ 251], 
5.00th=[ 253], 10.00th=[ 255], 20.00th=[ 258], 00:12:34.098 | 30.00th=[ 260], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:12:34.098 | 70.00th=[ 273], 80.00th=[ 318], 90.00th=[ 424], 95.00th=[ 537], 00:12:34.098 | 99.00th=[ 693], 99.50th=[ 701], 99.90th=[ 783], 99.95th=[ 865], 00:12:34.098 | 99.99th=[ 865] 00:12:34.098 bw ( KiB/s): min= 4640, max= 4640, per=39.68%, avg=4640.00, stdev= 0.00, samples=1 00:12:34.098 iops : min= 1160, max= 1160, avg=1160.00, stdev= 0.00, samples=1 00:12:34.098 lat (usec) : 250=0.12%, 500=81.90%, 750=14.78%, 1000=3.20% 00:12:34.098 cpu : usr=3.40%, sys=2.90%, ctx=2471, majf=0, minf=1 00:12:34.098 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:34.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.098 issued rwts: total=1024,1446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.098 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:34.098 job1: (groupid=0, jobs=1): err= 0: pid=1361156: Fri Jul 26 11:00:53 2024 00:12:34.098 read: IOPS=19, BW=78.7KiB/s (80.6kB/s)(80.0KiB/1016msec) 00:12:34.098 slat (nsec): min=10662, max=24771, avg=22360.00, stdev=3033.84 00:12:34.098 clat (usec): min=41775, max=42177, avg=41967.04, stdev=117.04 00:12:34.098 lat (usec): min=41799, max=42199, avg=41989.40, stdev=117.42 00:12:34.098 clat percentiles (usec): 00:12:34.098 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:12:34.098 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:34.098 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:34.098 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:34.098 | 99.99th=[42206] 00:12:34.098 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:12:34.098 slat (nsec): min=10920, max=40758, avg=12799.13, stdev=3042.98 00:12:34.098 clat (usec): min=228, max=992, avg=327.02, stdev=134.26 00:12:34.098 lat (usec): min=261, max=1032, avg=339.82, stdev=135.39 00:12:34.098 clat percentiles (usec): 00:12:34.098 | 1.00th=[ 251], 5.00th=[ 253], 10.00th=[ 255], 20.00th=[ 260], 00:12:34.098 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 285], 00:12:34.098 | 70.00th=[ 302], 80.00th=[ 338], 90.00th=[ 502], 95.00th=[ 701], 00:12:34.098 | 99.00th=[ 807], 99.50th=[ 824], 99.90th=[ 996], 99.95th=[ 996], 00:12:34.098 | 99.99th=[ 996] 00:12:34.098 bw ( KiB/s): min= 4096, max= 4096, per=35.03%, avg=4096.00, stdev= 0.00, samples=1 00:12:34.098 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:34.098 lat (usec) : 250=0.38%, 500=86.09%, 750=5.83%, 1000=3.95% 00:12:34.098 lat (msec) : 50=3.76% 00:12:34.098 cpu : usr=0.49%, sys=0.99%, ctx=533, majf=0, minf=2 00:12:34.098 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:34.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.098 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.098 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:34.098 job2: (groupid=0, jobs=1): err= 0: pid=1361157: Fri Jul 26 11:00:53 2024 00:12:34.098 read: IOPS=19, BW=78.4KiB/s (80.3kB/s)(80.0KiB/1020msec) 00:12:34.098 slat (nsec): min=10475, max=24474, avg=21525.95, stdev=2825.47 00:12:34.098 clat (usec): min=41264, max=42663, avg=41941.87, stdev=271.53 
00:12:34.098 lat (usec): min=41288, max=42684, avg=41963.40, stdev=272.14 00:12:34.098 clat percentiles (usec): 00:12:34.098 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:12:34.098 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:12:34.098 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:34.098 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:12:34.098 | 99.99th=[42730] 00:12:34.098 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:12:34.098 slat (nsec): min=10612, max=36189, avg=12968.83, stdev=4404.55 00:12:34.098 clat (usec): min=250, max=1069, avg=335.75, stdev=179.55 00:12:34.098 lat (usec): min=260, max=1097, avg=348.72, stdev=183.17 00:12:34.098 clat percentiles (usec): 00:12:34.098 | 1.00th=[ 251], 5.00th=[ 253], 10.00th=[ 255], 20.00th=[ 258], 00:12:34.098 | 30.00th=[ 260], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 265], 00:12:34.098 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 693], 95.00th=[ 848], 00:12:34.098 | 99.00th=[ 914], 99.50th=[ 1029], 99.90th=[ 1074], 99.95th=[ 1074], 00:12:34.098 | 99.99th=[ 1074] 00:12:34.098 bw ( KiB/s): min= 4096, max= 4096, per=35.03%, avg=4096.00, stdev= 0.00, samples=1 00:12:34.098 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:34.098 lat (usec) : 500=84.77%, 750=2.63%, 1000=8.27% 00:12:34.098 lat (msec) : 2=0.56%, 50=3.76% 00:12:34.098 cpu : usr=0.69%, sys=0.79%, ctx=532, majf=0, minf=1 00:12:34.098 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:34.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.098 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.098 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:34.098 job3: (groupid=0, jobs=1): err= 0: pid=1361158: Fri Jul 26 11:00:53 2024 00:12:34.098 read: IOPS=18, BW=75.5KiB/s (77.4kB/s)(76.0KiB/1006msec) 00:12:34.098 slat (nsec): min=9395, max=23667, avg=19763.89, stdev=4557.86 00:12:34.098 clat (usec): min=41551, max=42204, avg=41969.71, stdev=146.28 00:12:34.098 lat (usec): min=41573, max=42213, avg=41989.48, stdev=144.12 00:12:34.098 clat percentiles (usec): 00:12:34.098 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:12:34.098 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:34.098 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:34.098 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:34.098 | 99.99th=[42206] 00:12:34.098 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:12:34.098 slat (nsec): min=8906, max=63114, avg=11395.90, stdev=4375.34 00:12:34.098 clat (usec): min=248, max=1729, avg=391.99, stdev=190.66 00:12:34.098 lat (usec): min=258, max=1740, avg=403.38, stdev=192.51 00:12:34.098 clat percentiles (usec): 00:12:34.098 | 1.00th=[ 251], 5.00th=[ 251], 10.00th=[ 253], 20.00th=[ 258], 00:12:34.098 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 334], 00:12:34.098 | 70.00th=[ 416], 80.00th=[ 537], 90.00th=[ 709], 95.00th=[ 783], 00:12:34.098 | 99.00th=[ 873], 99.50th=[ 922], 99.90th=[ 1729], 99.95th=[ 1729], 00:12:34.098 | 99.99th=[ 1729] 00:12:34.098 bw ( KiB/s): min= 4096, max= 4096, per=35.03%, avg=4096.00, stdev= 0.00, samples=1 00:12:34.098 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 
00:12:34.098 lat (usec) : 250=0.94%, 500=73.82%, 750=12.81%, 1000=8.47% 00:12:34.098 lat (msec) : 2=0.38%, 50=3.58% 00:12:34.098 cpu : usr=0.30%, sys=0.50%, ctx=531, majf=0, minf=1 00:12:34.098 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:34.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:34.098 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:34.098 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:34.098 00:12:34.098 Run status group 0 (all jobs): 00:12:34.098 READ: bw=4247KiB/s (4349kB/s), 75.5KiB/s-4092KiB/s (77.4kB/s-4190kB/s), io=4332KiB (4436kB), run=1001-1020msec 00:12:34.098 WRITE: bw=11.4MiB/s (12.0MB/s), 2008KiB/s-5778KiB/s (2056kB/s-5917kB/s), io=11.6MiB (12.2MB), run=1001-1020msec 00:12:34.098 00:12:34.098 Disk stats (read/write): 00:12:34.098 nvme0n1: ios=999/1024, merge=0/0, ticks=1473/307, in_queue=1780, util=98.60% 00:12:34.098 nvme0n2: ios=66/512, merge=0/0, ticks=894/159, in_queue=1053, util=99.19% 00:12:34.099 nvme0n3: ios=16/512, merge=0/0, ticks=671/167, in_queue=838, util=89.09% 00:12:34.099 nvme0n4: ios=70/512, merge=0/0, ticks=773/194, in_queue=967, util=93.93% 00:12:34.099 11:00:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:34.099 [global] 00:12:34.099 thread=1 00:12:34.099 invalidate=1 00:12:34.099 rw=write 00:12:34.099 time_based=1 00:12:34.099 runtime=1 00:12:34.099 ioengine=libaio 00:12:34.099 direct=1 00:12:34.099 bs=4096 00:12:34.099 iodepth=128 00:12:34.099 norandommap=0 00:12:34.099 numjobs=1 00:12:34.099 00:12:34.099 verify_dump=1 00:12:34.099 verify_backlog=512 00:12:34.099 verify_state_save=0 00:12:34.099 do_verify=1 00:12:34.099 verify=crc32c-intel 00:12:34.099 [job0] 00:12:34.099 filename=/dev/nvme0n1 00:12:34.099 [job1] 00:12:34.099 filename=/dev/nvme0n2 00:12:34.099 [job2] 00:12:34.099 filename=/dev/nvme0n3 00:12:34.099 [job3] 00:12:34.099 filename=/dev/nvme0n4 00:12:34.099 Could not set queue depth (nvme0n1) 00:12:34.099 Could not set queue depth (nvme0n2) 00:12:34.099 Could not set queue depth (nvme0n3) 00:12:34.099 Could not set queue depth (nvme0n4) 00:12:34.356 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:34.356 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:34.356 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:34.356 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:34.356 fio-3.35 00:12:34.356 Starting 4 threads 00:12:35.730 00:12:35.730 job0: (groupid=0, jobs=1): err= 0: pid=1361528: Fri Jul 26 11:00:54 2024 00:12:35.730 read: IOPS=3345, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1007msec) 00:12:35.730 slat (nsec): min=1482, max=24556k, avg=140237.09, stdev=911675.05 00:12:35.730 clat (usec): min=2031, max=65750, avg=17582.85, stdev=8192.41 00:12:35.730 lat (usec): min=2552, max=65755, avg=17723.08, stdev=8260.79 00:12:35.730 clat percentiles (usec): 00:12:35.730 | 1.00th=[ 5473], 5.00th=[ 8586], 10.00th=[10159], 20.00th=[11469], 00:12:35.730 | 30.00th=[12780], 40.00th=[14877], 50.00th=[16188], 60.00th=[17957], 00:12:35.730 | 70.00th=[20317], 80.00th=[22152], 90.00th=[25297], 
95.00th=[28705], 00:12:35.730 | 99.00th=[55313], 99.50th=[61080], 99.90th=[64226], 99.95th=[65799], 00:12:35.730 | 99.99th=[65799] 00:12:35.730 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:12:35.730 slat (usec): min=2, max=12281, avg=141.68, stdev=642.36 00:12:35.730 clat (usec): min=1074, max=65751, avg=19097.27, stdev=9021.93 00:12:35.730 lat (usec): min=1088, max=65757, avg=19238.95, stdev=9058.04 00:12:35.730 clat percentiles (usec): 00:12:35.730 | 1.00th=[ 5604], 5.00th=[ 9110], 10.00th=[11207], 20.00th=[12518], 00:12:35.730 | 30.00th=[13698], 40.00th=[14877], 50.00th=[16319], 60.00th=[19006], 00:12:35.730 | 70.00th=[21890], 80.00th=[24249], 90.00th=[27657], 95.00th=[42206], 00:12:35.730 | 99.00th=[48497], 99.50th=[51643], 99.90th=[65799], 99.95th=[65799], 00:12:35.730 | 99.99th=[65799] 00:12:35.730 bw ( KiB/s): min=12960, max=15712, per=25.45%, avg=14336.00, stdev=1945.96, samples=2 00:12:35.730 iops : min= 3240, max= 3928, avg=3584.00, stdev=486.49, samples=2 00:12:35.730 lat (msec) : 2=0.06%, 4=0.33%, 10=7.45%, 20=58.08%, 50=32.99% 00:12:35.730 lat (msec) : 100=1.09% 00:12:35.730 cpu : usr=3.08%, sys=2.58%, ctx=551, majf=0, minf=1 00:12:35.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:35.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.730 issued rwts: total=3369,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.730 job1: (groupid=0, jobs=1): err= 0: pid=1361529: Fri Jul 26 11:00:54 2024 00:12:35.730 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:12:35.730 slat (nsec): min=1107, max=14880k, avg=132159.06, stdev=908588.70 00:12:35.730 clat (usec): min=3465, max=49398, avg=17104.71, stdev=7193.09 00:12:35.730 lat (usec): min=3472, max=49413, avg=17236.87, stdev=7235.30 00:12:35.730 clat percentiles (usec): 00:12:35.730 | 1.00th=[ 6128], 5.00th=[ 8094], 10.00th=[10421], 20.00th=[12256], 00:12:35.730 | 30.00th=[12911], 40.00th=[14091], 50.00th=[15401], 60.00th=[17433], 00:12:35.730 | 70.00th=[19268], 80.00th=[21627], 90.00th=[24249], 95.00th=[29492], 00:12:35.730 | 99.00th=[47449], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:12:35.730 | 99.99th=[49546] 00:12:35.730 write: IOPS=3364, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1004msec); 0 zone resets 00:12:35.730 slat (usec): min=2, max=13989, avg=159.32, stdev=933.81 00:12:35.730 clat (usec): min=2326, max=44144, avg=22153.34, stdev=6299.07 00:12:35.730 lat (usec): min=2383, max=44168, avg=22312.66, stdev=6323.21 00:12:35.730 clat percentiles (usec): 00:12:35.730 | 1.00th=[ 9372], 5.00th=[11731], 10.00th=[13173], 20.00th=[16909], 00:12:35.730 | 30.00th=[19530], 40.00th=[20841], 50.00th=[22152], 60.00th=[23725], 00:12:35.730 | 70.00th=[25297], 80.00th=[27132], 90.00th=[29754], 95.00th=[32637], 00:12:35.730 | 99.00th=[38011], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:12:35.730 | 99.99th=[44303] 00:12:35.730 bw ( KiB/s): min=12288, max=13720, per=23.09%, avg=13004.00, stdev=1012.58, samples=2 00:12:35.730 iops : min= 3072, max= 3430, avg=3251.00, stdev=253.14, samples=2 00:12:35.730 lat (msec) : 4=0.50%, 10=4.70%, 20=50.29%, 50=44.51% 00:12:35.730 cpu : usr=2.49%, sys=3.29%, ctx=448, majf=0, minf=1 00:12:35.730 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:35.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:12:35.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.730 issued rwts: total=3072,3378,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.730 job2: (groupid=0, jobs=1): err= 0: pid=1361531: Fri Jul 26 11:00:54 2024 00:12:35.730 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:12:35.730 slat (nsec): min=1109, max=40413k, avg=119370.35, stdev=961808.22 00:12:35.730 clat (usec): min=5225, max=50158, avg=16135.10, stdev=6838.37 00:12:35.730 lat (usec): min=5229, max=50163, avg=16254.47, stdev=6856.59 00:12:35.730 clat percentiles (usec): 00:12:35.730 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[10945], 20.00th=[11600], 00:12:35.730 | 30.00th=[12518], 40.00th=[13304], 50.00th=[14746], 60.00th=[16188], 00:12:35.730 | 70.00th=[17433], 80.00th=[19268], 90.00th=[21103], 95.00th=[24773], 00:12:35.731 | 99.00th=[46400], 99.50th=[47449], 99.90th=[50070], 99.95th=[50070], 00:12:35.731 | 99.99th=[50070] 00:12:35.731 write: IOPS=3626, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1006msec); 0 zone resets 00:12:35.731 slat (nsec): min=1975, max=10742k, avg=136609.82, stdev=616481.50 00:12:35.731 clat (usec): min=1720, max=54567, avg=19134.56, stdev=9317.83 00:12:35.731 lat (usec): min=2389, max=54583, avg=19271.17, stdev=9358.50 00:12:35.731 clat percentiles (usec): 00:12:35.731 | 1.00th=[ 4948], 5.00th=[ 7373], 10.00th=[ 8848], 20.00th=[11207], 00:12:35.731 | 30.00th=[13566], 40.00th=[16057], 50.00th=[18482], 60.00th=[20841], 00:12:35.731 | 70.00th=[22676], 80.00th=[23725], 90.00th=[30278], 95.00th=[38536], 00:12:35.731 | 99.00th=[52167], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:12:35.731 | 99.99th=[54789] 00:12:35.731 bw ( KiB/s): min=12960, max=15768, per=25.50%, avg=14364.00, stdev=1985.56, samples=2 00:12:35.731 iops : min= 3240, max= 3942, avg=3591.00, stdev=496.39, samples=2 00:12:35.731 lat (msec) : 2=0.01%, 4=0.22%, 10=10.59%, 20=58.99%, 50=29.37% 00:12:35.731 lat (msec) : 100=0.82% 00:12:35.731 cpu : usr=2.19%, sys=2.59%, ctx=687, majf=0, minf=1 00:12:35.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:35.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.731 issued rwts: total=3584,3648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.731 job3: (groupid=0, jobs=1): err= 0: pid=1361532: Fri Jul 26 11:00:54 2024 00:12:35.731 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:12:35.731 slat (nsec): min=1407, max=32637k, avg=136065.17, stdev=969449.08 00:12:35.731 clat (usec): min=6866, max=58288, avg=17595.61, stdev=8017.46 00:12:35.731 lat (usec): min=6869, max=58292, avg=17731.68, stdev=8054.79 00:12:35.731 clat percentiles (usec): 00:12:35.731 | 1.00th=[ 7373], 5.00th=[ 9372], 10.00th=[10945], 20.00th=[11731], 00:12:35.731 | 30.00th=[12387], 40.00th=[14222], 50.00th=[15926], 60.00th=[17695], 00:12:35.731 | 70.00th=[19268], 80.00th=[22414], 90.00th=[26870], 95.00th=[29230], 00:12:35.731 | 99.00th=[53216], 99.50th=[54789], 99.90th=[57410], 99.95th=[58459], 00:12:35.731 | 99.99th=[58459] 00:12:35.731 write: IOPS=3585, BW=14.0MiB/s (14.7MB/s)(14.2MiB/1011msec); 0 zone resets 00:12:35.731 slat (usec): min=2, max=11509, avg=139.11, stdev=696.19 00:12:35.731 clat (usec): min=2723, max=58851, avg=17954.64, stdev=6579.55 00:12:35.731 lat 
(usec): min=4346, max=58855, avg=18093.75, stdev=6596.74 00:12:35.731 clat percentiles (usec): 00:12:35.731 | 1.00th=[ 6194], 5.00th=[ 8717], 10.00th=[10552], 20.00th=[11863], 00:12:35.731 | 30.00th=[13435], 40.00th=[15008], 50.00th=[17433], 60.00th=[19006], 00:12:35.731 | 70.00th=[21103], 80.00th=[25035], 90.00th=[27132], 95.00th=[28181], 00:12:35.731 | 99.00th=[32113], 99.50th=[32113], 99.90th=[58983], 99.95th=[58983], 00:12:35.731 | 99.99th=[58983] 00:12:35.731 bw ( KiB/s): min=13376, max=15296, per=25.45%, avg=14336.00, stdev=1357.65, samples=2 00:12:35.731 iops : min= 3344, max= 3824, avg=3584.00, stdev=339.41, samples=2 00:12:35.731 lat (msec) : 4=0.01%, 10=6.98%, 20=61.85%, 50=30.28%, 100=0.87% 00:12:35.731 cpu : usr=2.38%, sys=2.97%, ctx=534, majf=0, minf=1 00:12:35.731 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:35.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.731 issued rwts: total=3584,3625,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.731 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.731 00:12:35.731 Run status group 0 (all jobs): 00:12:35.731 READ: bw=52.6MiB/s (55.1MB/s), 12.0MiB/s-13.9MiB/s (12.5MB/s-14.6MB/s), io=53.2MiB (55.7MB), run=1004-1011msec 00:12:35.731 WRITE: bw=55.0MiB/s (57.7MB/s), 13.1MiB/s-14.2MiB/s (13.8MB/s-14.9MB/s), io=55.6MiB (58.3MB), run=1004-1011msec 00:12:35.731 00:12:35.731 Disk stats (read/write): 00:12:35.731 nvme0n1: ios=2586/3071, merge=0/0, ticks=44154/55809, in_queue=99963, util=98.00% 00:12:35.731 nvme0n2: ios=2600/2788, merge=0/0, ticks=39466/48846, in_queue=88312, util=98.37% 00:12:35.731 nvme0n3: ios=2906/3072, merge=0/0, ticks=47550/58002, in_queue=105552, util=99.27% 00:12:35.731 nvme0n4: ios=2957/3072, merge=0/0, ticks=53684/51547, in_queue=105231, util=97.05% 00:12:35.731 11:00:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:35.731 [global] 00:12:35.731 thread=1 00:12:35.731 invalidate=1 00:12:35.731 rw=randwrite 00:12:35.731 time_based=1 00:12:35.731 runtime=1 00:12:35.731 ioengine=libaio 00:12:35.731 direct=1 00:12:35.731 bs=4096 00:12:35.731 iodepth=128 00:12:35.731 norandommap=0 00:12:35.731 numjobs=1 00:12:35.731 00:12:35.731 verify_dump=1 00:12:35.731 verify_backlog=512 00:12:35.731 verify_state_save=0 00:12:35.731 do_verify=1 00:12:35.731 verify=crc32c-intel 00:12:35.731 [job0] 00:12:35.731 filename=/dev/nvme0n1 00:12:35.731 [job1] 00:12:35.731 filename=/dev/nvme0n2 00:12:35.731 [job2] 00:12:35.731 filename=/dev/nvme0n3 00:12:35.731 [job3] 00:12:35.731 filename=/dev/nvme0n4 00:12:35.731 Could not set queue depth (nvme0n1) 00:12:35.731 Could not set queue depth (nvme0n2) 00:12:35.731 Could not set queue depth (nvme0n3) 00:12:35.731 Could not set queue depth (nvme0n4) 00:12:35.731 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.731 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.731 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.731 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:35.731 fio-3.35 00:12:35.731 Starting 4 threads 00:12:37.106 
00:12:37.106 job0: (groupid=0, jobs=1): err= 0: pid=1361900: Fri Jul 26 11:00:56 2024 00:12:37.106 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.1MiB/1012msec) 00:12:37.106 slat (nsec): min=1426, max=11563k, avg=121912.29, stdev=779210.19 00:12:37.106 clat (usec): min=5857, max=48415, avg=15872.48, stdev=7208.55 00:12:37.106 lat (usec): min=5863, max=48421, avg=15994.39, stdev=7265.30 00:12:37.106 clat percentiles (usec): 00:12:37.106 | 1.00th=[ 6980], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[ 9765], 00:12:37.106 | 30.00th=[11338], 40.00th=[12649], 50.00th=[13698], 60.00th=[15664], 00:12:37.106 | 70.00th=[17957], 80.00th=[20579], 90.00th=[25035], 95.00th=[30540], 00:12:37.106 | 99.00th=[39584], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:12:37.106 | 99.99th=[48497] 00:12:37.106 write: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec); 0 zone resets 00:12:37.106 slat (usec): min=2, max=11184, avg=120.00, stdev=652.79 00:12:37.106 clat (usec): min=1316, max=49020, avg=17397.71, stdev=8017.16 00:12:37.106 lat (usec): min=1585, max=50982, avg=17517.70, stdev=8041.44 00:12:37.106 clat percentiles (usec): 00:12:37.106 | 1.00th=[ 3949], 5.00th=[ 7439], 10.00th=[ 9110], 20.00th=[11731], 00:12:37.106 | 30.00th=[13304], 40.00th=[14353], 50.00th=[15139], 60.00th=[16712], 00:12:37.106 | 70.00th=[18482], 80.00th=[22676], 90.00th=[29492], 95.00th=[34341], 00:12:37.106 | 99.00th=[42730], 99.50th=[43779], 99.90th=[48497], 99.95th=[49021], 00:12:37.106 | 99.99th=[49021] 00:12:37.106 bw ( KiB/s): min=15536, max=16328, per=28.68%, avg=15932.00, stdev=560.03, samples=2 00:12:37.106 iops : min= 3884, max= 4082, avg=3983.00, stdev=140.01, samples=2 00:12:37.106 lat (msec) : 2=0.05%, 4=0.48%, 10=16.77%, 20=58.51%, 50=24.19% 00:12:37.106 cpu : usr=3.26%, sys=3.46%, ctx=550, majf=0, minf=1 00:12:37.106 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:37.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:37.106 issued rwts: total=3598,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.106 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:37.106 job1: (groupid=0, jobs=1): err= 0: pid=1361901: Fri Jul 26 11:00:56 2024 00:12:37.106 read: IOPS=3520, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1018msec) 00:12:37.106 slat (nsec): min=1107, max=32997k, avg=117648.25, stdev=891359.94 00:12:37.106 clat (usec): min=3864, max=59346, avg=15724.11, stdev=9963.86 00:12:37.106 lat (usec): min=3870, max=65011, avg=15841.76, stdev=10030.39 00:12:37.106 clat percentiles (usec): 00:12:37.106 | 1.00th=[ 6652], 5.00th=[ 7242], 10.00th=[ 8225], 20.00th=[ 8979], 00:12:37.106 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11863], 60.00th=[13566], 00:12:37.106 | 70.00th=[16057], 80.00th=[21890], 90.00th=[28443], 95.00th=[35390], 00:12:37.106 | 99.00th=[54789], 99.50th=[58983], 99.90th=[59507], 99.95th=[59507], 00:12:37.106 | 99.99th=[59507] 00:12:37.106 write: IOPS=3823, BW=14.9MiB/s (15.7MB/s)(15.2MiB/1018msec); 0 zone resets 00:12:37.106 slat (usec): min=2, max=8855, avg=124.65, stdev=549.01 00:12:37.106 clat (usec): min=360, max=65005, avg=18638.70, stdev=11981.01 00:12:37.106 lat (usec): min=783, max=65008, avg=18763.36, stdev=12051.32 00:12:37.106 clat percentiles (usec): 00:12:37.106 | 1.00th=[ 1663], 5.00th=[ 4555], 10.00th=[ 6652], 20.00th=[ 9110], 00:12:37.106 | 30.00th=[10290], 40.00th=[12256], 50.00th=[13304], 60.00th=[17957], 00:12:37.106 | 70.00th=[26346], 
80.00th=[29492], 90.00th=[37487], 95.00th=[40633], 00:12:37.106 | 99.00th=[50594], 99.50th=[54789], 99.90th=[60556], 99.95th=[62129], 00:12:37.106 | 99.99th=[64750] 00:12:37.106 bw ( KiB/s): min=10592, max=19520, per=27.11%, avg=15056.00, stdev=6313.05, samples=2 00:12:37.106 iops : min= 2648, max= 4880, avg=3764.00, stdev=1578.26, samples=2 00:12:37.106 lat (usec) : 500=0.01%, 1000=0.07% 00:12:37.106 lat (msec) : 2=0.98%, 4=1.24%, 10=26.54%, 20=41.01%, 50=28.60% 00:12:37.106 lat (msec) : 100=1.55% 00:12:37.106 cpu : usr=1.67%, sys=4.23%, ctx=595, majf=0, minf=1 00:12:37.106 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:37.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:37.106 issued rwts: total=3584,3892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.106 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:37.106 job2: (groupid=0, jobs=1): err= 0: pid=1361903: Fri Jul 26 11:00:56 2024 00:12:37.106 read: IOPS=2490, BW=9961KiB/s (10.2MB/s)(10.0MiB/1028msec) 00:12:37.106 slat (nsec): min=1538, max=15710k, avg=171842.70, stdev=987675.95 00:12:37.106 clat (usec): min=6264, max=53893, avg=21076.48, stdev=8211.73 00:12:37.106 lat (usec): min=9047, max=53900, avg=21248.32, stdev=8274.18 00:12:37.106 clat percentiles (usec): 00:12:37.106 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[13435], 00:12:37.106 | 30.00th=[16450], 40.00th=[17695], 50.00th=[20841], 60.00th=[23725], 00:12:37.106 | 70.00th=[25035], 80.00th=[26870], 90.00th=[30016], 95.00th=[33162], 00:12:37.106 | 99.00th=[49546], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:12:37.106 | 99.99th=[53740] 00:12:37.106 write: IOPS=2728, BW=10.7MiB/s (11.2MB/s)(11.0MiB/1028msec); 0 zone resets 00:12:37.106 slat (usec): min=2, max=13178, avg=191.54, stdev=890.17 00:12:37.106 clat (usec): min=3866, max=57670, avg=27188.47, stdev=12853.75 00:12:37.106 lat (usec): min=3875, max=59466, avg=27380.02, stdev=12920.22 00:12:37.106 clat percentiles (usec): 00:12:37.106 | 1.00th=[ 6456], 5.00th=[ 9503], 10.00th=[11076], 20.00th=[13829], 00:12:37.106 | 30.00th=[19006], 40.00th=[21890], 50.00th=[25560], 60.00th=[29492], 00:12:37.106 | 70.00th=[34341], 80.00th=[40109], 90.00th=[45351], 95.00th=[50070], 00:12:37.106 | 99.00th=[54264], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:12:37.106 | 99.99th=[57410] 00:12:37.106 bw ( KiB/s): min= 9136, max=12288, per=19.29%, avg=10712.00, stdev=2228.80, samples=2 00:12:37.106 iops : min= 2284, max= 3072, avg=2678.00, stdev=557.20, samples=2 00:12:37.106 lat (msec) : 4=0.22%, 10=8.74%, 20=31.48%, 50=56.70%, 100=2.85% 00:12:37.106 cpu : usr=2.53%, sys=2.14%, ctx=487, majf=0, minf=1 00:12:37.106 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:37.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:37.106 issued rwts: total=2560,2805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.106 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:37.106 job3: (groupid=0, jobs=1): err= 0: pid=1361904: Fri Jul 26 11:00:56 2024 00:12:37.106 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:12:37.106 slat (nsec): min=1564, max=16382k, avg=124948.64, stdev=782389.66 00:12:37.106 clat (usec): min=5286, max=49427, avg=15917.12, stdev=7905.88 00:12:37.106 lat (usec): min=5293, max=49436, 
avg=16042.07, stdev=7964.52 00:12:37.106 clat percentiles (usec): 00:12:37.106 | 1.00th=[ 5932], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[10159], 00:12:37.106 | 30.00th=[11338], 40.00th=[11994], 50.00th=[13173], 60.00th=[14877], 00:12:37.106 | 70.00th=[17171], 80.00th=[21890], 90.00th=[28181], 95.00th=[33424], 00:12:37.106 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43779], 99.95th=[49021], 00:12:37.106 | 99.99th=[49546] 00:12:37.106 write: IOPS=3461, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1006msec); 0 zone resets 00:12:37.106 slat (nsec): min=1922, max=11939k, avg=169845.50, stdev=702252.92 00:12:37.106 clat (usec): min=2306, max=54691, avg=22603.50, stdev=9539.54 00:12:37.107 lat (usec): min=3430, max=54713, avg=22773.34, stdev=9585.61 00:12:37.107 clat percentiles (usec): 00:12:37.107 | 1.00th=[ 5211], 5.00th=[ 7832], 10.00th=[ 9765], 20.00th=[14746], 00:12:37.107 | 30.00th=[16712], 40.00th=[20317], 50.00th=[22676], 60.00th=[25297], 00:12:37.107 | 70.00th=[26870], 80.00th=[28967], 90.00th=[33817], 95.00th=[41157], 00:12:37.107 | 99.00th=[48497], 99.50th=[52167], 99.90th=[54789], 99.95th=[54789], 00:12:37.107 | 99.99th=[54789] 00:12:37.107 bw ( KiB/s): min=13360, max=13472, per=24.15%, avg=13416.00, stdev=79.20, samples=2 00:12:37.107 iops : min= 3340, max= 3368, avg=3354.00, stdev=19.80, samples=2 00:12:37.107 lat (msec) : 4=0.09%, 10=14.54%, 20=42.26%, 50=42.69%, 100=0.41% 00:12:37.107 cpu : usr=2.99%, sys=1.89%, ctx=569, majf=0, minf=1 00:12:37.107 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:37.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:37.107 issued rwts: total=3072,3482,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.107 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:37.107 00:12:37.107 Run status group 0 (all jobs): 00:12:37.107 READ: bw=48.7MiB/s (51.1MB/s), 9961KiB/s-13.9MiB/s (10.2MB/s-14.6MB/s), io=50.1MiB (52.5MB), run=1006-1028msec 00:12:37.107 WRITE: bw=54.2MiB/s (56.9MB/s), 10.7MiB/s-15.8MiB/s (11.2MB/s-16.6MB/s), io=55.8MiB (58.5MB), run=1006-1028msec 00:12:37.107 00:12:37.107 Disk stats (read/write): 00:12:37.107 nvme0n1: ios=3122/3320, merge=0/0, ticks=44239/51026, in_queue=95265, util=89.18% 00:12:37.107 nvme0n2: ios=3103/3456, merge=0/0, ticks=46749/49326, in_queue=96075, util=96.96% 00:12:37.107 nvme0n3: ios=2070/2326, merge=0/0, ticks=46934/59098, in_queue=106032, util=98.96% 00:12:37.107 nvme0n4: ios=2791/3072, merge=0/0, ticks=24645/42375, in_queue=67020, util=89.86% 00:12:37.107 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:37.107 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1362135 00:12:37.107 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:37.107 11:00:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:37.107 [global] 00:12:37.107 thread=1 00:12:37.107 invalidate=1 00:12:37.107 rw=read 00:12:37.107 time_based=1 00:12:37.107 runtime=10 00:12:37.107 ioengine=libaio 00:12:37.107 direct=1 00:12:37.107 bs=4096 00:12:37.107 iodepth=1 00:12:37.107 norandommap=1 00:12:37.107 numjobs=1 00:12:37.107 00:12:37.107 [job0] 00:12:37.107 filename=/dev/nvme0n1 00:12:37.107 [job1] 00:12:37.107 filename=/dev/nvme0n2 00:12:37.107 [job2] 00:12:37.107 
filename=/dev/nvme0n3 00:12:37.107 [job3] 00:12:37.107 filename=/dev/nvme0n4 00:12:37.107 Could not set queue depth (nvme0n1) 00:12:37.107 Could not set queue depth (nvme0n2) 00:12:37.107 Could not set queue depth (nvme0n3) 00:12:37.107 Could not set queue depth (nvme0n4) 00:12:37.365 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.365 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.365 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.365 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:37.365 fio-3.35 00:12:37.365 Starting 4 threads 00:12:40.654 11:00:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:40.654 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=22315008, buflen=4096 00:12:40.654 fio: pid=1362284, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:40.654 11:00:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:40.654 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=286720, buflen=4096 00:12:40.654 fio: pid=1362279, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:40.654 11:00:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.654 11:00:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:40.654 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1687552, buflen=4096 00:12:40.654 fio: pid=1362275, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:40.654 11:00:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.655 11:00:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:40.914 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=22208512, buflen=4096 00:12:40.914 fio: pid=1362276, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:40.914 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.914 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:40.914 00:12:40.914 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1362275: Fri Jul 26 11:01:00 2024 00:12:40.914 read: IOPS=137, BW=548KiB/s (561kB/s)(1648KiB/3006msec) 00:12:40.914 slat (usec): min=2, max=6597, avg=26.03, stdev=324.37 00:12:40.914 clat (usec): min=472, max=42306, avg=7265.15, stdev=15036.76 00:12:40.914 lat (usec): min=479, max=42314, avg=7291.23, stdev=15040.14 00:12:40.914 clat percentiles (usec): 00:12:40.914 | 1.00th=[ 486], 5.00th=[ 498], 10.00th=[ 510], 20.00th=[ 570], 00:12:40.914 | 
30.00th=[ 603], 40.00th=[ 660], 50.00th=[ 734], 60.00th=[ 824], 00:12:40.914 | 70.00th=[ 988], 80.00th=[ 1270], 90.00th=[42206], 95.00th=[42206], 00:12:40.914 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:40.914 | 99.99th=[42206] 00:12:40.914 bw ( KiB/s): min= 96, max= 528, per=1.85%, avg=260.80, stdev=226.10, samples=5 00:12:40.914 iops : min= 24, max= 132, avg=65.20, stdev=56.53, samples=5 00:12:40.914 lat (usec) : 500=5.81%, 750=46.25%, 1000=17.92% 00:12:40.914 lat (msec) : 2=13.32%, 4=0.73%, 50=15.74% 00:12:40.914 cpu : usr=0.03%, sys=0.23%, ctx=416, majf=0, minf=1 00:12:40.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.914 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.914 issued rwts: total=413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.914 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1362276: Fri Jul 26 11:01:00 2024 00:12:40.914 read: IOPS=1677, BW=6708KiB/s (6869kB/s)(21.2MiB/3233msec) 00:12:40.914 slat (usec): min=7, max=14608, avg=11.14, stdev=199.32 00:12:40.914 clat (usec): min=349, max=42990, avg=582.93, stdev=2127.13 00:12:40.914 lat (usec): min=357, max=57599, avg=594.06, stdev=2195.19 00:12:40.914 clat percentiles (usec): 00:12:40.914 | 1.00th=[ 408], 5.00th=[ 412], 10.00th=[ 416], 20.00th=[ 416], 00:12:40.914 | 30.00th=[ 420], 40.00th=[ 424], 50.00th=[ 429], 60.00th=[ 445], 00:12:40.914 | 70.00th=[ 519], 80.00th=[ 537], 90.00th=[ 562], 95.00th=[ 578], 00:12:40.914 | 99.00th=[ 775], 99.50th=[ 1090], 99.90th=[42206], 99.95th=[42206], 00:12:40.914 | 99.99th=[42730] 00:12:40.914 bw ( KiB/s): min= 2086, max= 9040, per=51.15%, avg=7185.00, stdev=2643.08, samples=6 00:12:40.914 iops : min= 521, max= 2260, avg=1796.17, stdev=660.96, samples=6 00:12:40.914 lat (usec) : 500=64.15%, 750=34.69%, 1000=0.52% 00:12:40.914 lat (msec) : 2=0.35%, 50=0.28% 00:12:40.914 cpu : usr=0.93%, sys=2.63%, ctx=5428, majf=0, minf=1 00:12:40.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.914 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.914 issued rwts: total=5423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.914 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1362279: Fri Jul 26 11:01:00 2024 00:12:40.914 read: IOPS=25, BW=99.3KiB/s (102kB/s)(280KiB/2820msec) 00:12:40.914 slat (nsec): min=9692, max=32261, avg=22402.80, stdev=2964.74 00:12:40.914 clat (usec): min=842, max=43040, avg=40239.26, stdev=8382.68 00:12:40.914 lat (usec): min=874, max=43063, avg=40261.65, stdev=8381.70 00:12:40.914 clat percentiles (usec): 00:12:40.914 | 1.00th=[ 840], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:12:40.914 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:12:40.914 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:12:40.914 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:12:40.914 | 99.99th=[43254] 00:12:40.914 bw ( KiB/s): min= 88, max= 104, per=0.69%, avg=97.60, stdev= 6.69, samples=5 00:12:40.914 iops : min= 22, max= 26, avg=24.40, stdev= 
1.67, samples=5 00:12:40.914 lat (usec) : 1000=1.41% 00:12:40.914 lat (msec) : 2=2.82%, 50=94.37% 00:12:40.914 cpu : usr=0.11%, sys=0.00%, ctx=71, majf=0, minf=1 00:12:40.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.914 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.914 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.914 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1362284: Fri Jul 26 11:01:00 2024 00:12:40.914 read: IOPS=2073, BW=8292KiB/s (8491kB/s)(21.3MiB/2628msec) 00:12:40.914 slat (nsec): min=7279, max=39173, avg=8442.93, stdev=1383.66 00:12:40.914 clat (usec): min=367, max=1650, avg=471.56, stdev=99.24 00:12:40.914 lat (usec): min=375, max=1660, avg=480.00, stdev=99.46 00:12:40.914 clat percentiles (usec): 00:12:40.914 | 1.00th=[ 408], 5.00th=[ 412], 10.00th=[ 412], 20.00th=[ 416], 00:12:40.914 | 30.00th=[ 420], 40.00th=[ 424], 50.00th=[ 429], 60.00th=[ 441], 00:12:40.914 | 70.00th=[ 515], 80.00th=[ 529], 90.00th=[ 545], 95.00th=[ 570], 00:12:40.914 | 99.00th=[ 930], 99.50th=[ 1123], 99.90th=[ 1582], 99.95th=[ 1598], 00:12:40.914 | 99.99th=[ 1647] 00:12:40.914 bw ( KiB/s): min= 6840, max= 9040, per=58.87%, avg=8268.80, stdev=920.62, samples=5 00:12:40.914 iops : min= 1710, max= 2260, avg=2067.20, stdev=230.15, samples=5 00:12:40.914 lat (usec) : 500=66.10%, 750=32.37%, 1000=0.66% 00:12:40.914 lat (msec) : 2=0.84% 00:12:40.914 cpu : usr=1.03%, sys=3.54%, ctx=5451, majf=0, minf=2 00:12:40.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:40.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.914 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.914 issued rwts: total=5449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:40.914 00:12:40.914 Run status group 0 (all jobs): 00:12:40.914 READ: bw=13.7MiB/s (14.4MB/s), 99.3KiB/s-8292KiB/s (102kB/s-8491kB/s), io=44.3MiB (46.5MB), run=2628-3233msec 00:12:40.914 00:12:40.914 Disk stats (read/write): 00:12:40.914 nvme0n1: ios=170/0, merge=0/0, ticks=2790/0, in_queue=2790, util=93.89% 00:12:40.914 nvme0n2: ios=5454/0, merge=0/0, ticks=3955/0, in_queue=3955, util=98.70% 00:12:40.914 nvme0n3: ios=69/0, merge=0/0, ticks=2777/0, in_queue=2777, util=96.20% 00:12:40.914 nvme0n4: ios=5337/0, merge=0/0, ticks=2628/0, in_queue=2628, util=100.00% 00:12:40.914 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:40.914 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:41.173 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:41.173 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:41.432 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:41.432 
11:01:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:41.691 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:41.691 11:01:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:41.691 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:41.691 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1362135 00:12:41.691 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:41.691 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.950 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.950 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:41.950 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:41.950 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.950 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:41.951 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.951 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:41.951 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:41.951 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:41.951 nvmf hotplug test: fio failed as expected 00:12:41.951 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:42.211 11:01:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:42.211 rmmod nvme_tcp 00:12:42.211 rmmod nvme_fabrics 00:12:42.211 rmmod nvme_keyring 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1359213 ']' 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1359213 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1359213 ']' 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1359213 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1359213 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1359213' 00:12:42.211 killing process with pid 1359213 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1359213 00:12:42.211 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1359213 00:12:42.472 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:42.472 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:42.472 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:42.472 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:42.472 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:42.472 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.472 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.472 11:01:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.380 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:44.380 00:12:44.380 real 0m26.327s 00:12:44.380 user 1m46.629s 00:12:44.380 sys 0m7.531s 00:12:44.380 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.380 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.381 ************************************ 00:12:44.381 END TEST nvmf_fio_target 00:12:44.381 ************************************ 00:12:44.381 11:01:03 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:44.381 11:01:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:44.381 11:01:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:44.381 11:01:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:44.640 ************************************ 00:12:44.640 START TEST nvmf_bdevio 00:12:44.640 ************************************ 00:12:44.640 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:44.640 * Looking for test storage... 00:12:44.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.640 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.640 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:44.640 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.640 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.640 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.640 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.640 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.640 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.640 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.640 11:01:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.640 11:01:04 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:12:44.640 11:01:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 
00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.941 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:49.942 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:49.942 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
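The e810/x722/mlx arrays populated above hold the PCI device IDs the harness will accept for a phy-mode run, and on this host two Intel E810 functions (0x8086:0x159b, ice driver) match. As a rough standalone check, not the harness's own pci_bus_cache lookup (whose construction is not part of this trace), the same candidates can be listed with stock pciutils:

  # Device IDs exactly as enumerated in the trace; vendor 0x8086 = Intel, 0x15b3 = Mellanox.
  for dev in 1592 159b 37d2; do lspci -nn -d 8086:"$dev"; done
  for dev in a2dc 1021 a2d6 101d 1017 1019 1015 1013; do lspci -nn -d 15b3:"$dev"; done
  # Interface name(s) behind a matched function, the same sysfs path the harness globs:
  ls /sys/bus/pci/devices/0000:86:00.0/net/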
00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:49.942 Found net devices under 0000:86:00.0: cvl_0_0 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:49.942 Found net devices under 0000:86:00.1: cvl_0_1 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.942 11:01:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:49.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:12:49.942 00:12:49.942 --- 10.0.0.2 ping statistics --- 00:12:49.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.942 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:49.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.374 ms 00:12:49.942 00:12:49.942 --- 10.0.0.1 ping statistics --- 00:12:49.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.942 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1366533 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1366533 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1366533 ']' 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:49.942 11:01:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:49.942 [2024-07-26 11:01:09.437153] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
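With the addressing done and both pings answering, nvmf_tcp_init has left the two E810 ports wired back to back: cvl_0_0 now sits inside the cvl_0_0_ns_spdk namespace as 10.0.0.2 (the target side, where the nvmf_tgt whose startup banner appears around this point runs), while cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side). Condensed from the commands traced above, and assuming the two ports really are cabled to each other as the successful pings suggest, the topology can be rebuilt by hand with:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # exactly as traced; accepts TCP/4420 on the initiator-side interface
  ping -c 1 10.0.0.2                                                 # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> initiator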
00:12:49.942 [2024-07-26 11:01:09.437195] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.201 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.201 [2024-07-26 11:01:09.494777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.201 [2024-07-26 11:01:09.575174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.201 [2024-07-26 11:01:09.575211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.201 [2024-07-26 11:01:09.575220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.201 [2024-07-26 11:01:09.575228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.201 [2024-07-26 11:01:09.575234] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.201 [2024-07-26 11:01:09.575346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:50.201 [2024-07-26 11:01:09.575452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:50.201 [2024-07-26 11:01:09.575561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.201 [2024-07-26 11:01:09.575561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:50.766 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:50.766 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:50.766 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:50.766 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:50.766 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.024 [2024-07-26 11:01:10.281391] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.024 Malloc0 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:51.024 [2024-07-26 11:01:10.332735] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:51.024 { 00:12:51.024 "params": { 00:12:51.024 "name": "Nvme$subsystem", 00:12:51.024 "trtype": "$TEST_TRANSPORT", 00:12:51.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:51.024 "adrfam": "ipv4", 00:12:51.024 "trsvcid": "$NVMF_PORT", 00:12:51.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:51.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:51.024 "hdgst": ${hdgst:-false}, 00:12:51.024 "ddgst": ${ddgst:-false} 00:12:51.024 }, 00:12:51.024 "method": "bdev_nvme_attach_controller" 00:12:51.024 } 00:12:51.024 EOF 00:12:51.024 )") 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
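Everything bdevio is about to attach to was created by the short RPC sequence traced above. Pulled together as a standalone sketch (arguments exactly as traced; issued here through scripts/rpc.py against the /var/tmp/spdk.sock socket the target announced, rather than through the harness's rpc_cmd wrapper):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, flags as traced
  $RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON that gen_nvmf_target_json assembles in the next few entries simply points bdev_nvme_attach_controller at that same listener (trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1), so bdevio sees the namespace as Nvme1n1.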
00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:51.024 11:01:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:51.024 "params": { 00:12:51.024 "name": "Nvme1", 00:12:51.024 "trtype": "tcp", 00:12:51.024 "traddr": "10.0.0.2", 00:12:51.024 "adrfam": "ipv4", 00:12:51.024 "trsvcid": "4420", 00:12:51.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.024 "hdgst": false, 00:12:51.024 "ddgst": false 00:12:51.024 }, 00:12:51.024 "method": "bdev_nvme_attach_controller" 00:12:51.024 }' 00:12:51.024 [2024-07-26 11:01:10.382405] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:51.024 [2024-07-26 11:01:10.382452] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1366783 ] 00:12:51.024 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.024 [2024-07-26 11:01:10.438248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:51.024 [2024-07-26 11:01:10.513532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.024 [2024-07-26 11:01:10.513627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.024 [2024-07-26 11:01:10.513628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.283 I/O targets: 00:12:51.283 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:51.283 00:12:51.283 00:12:51.283 CUnit - A unit testing framework for C - Version 2.1-3 00:12:51.283 http://cunit.sourceforge.net/ 00:12:51.283 00:12:51.283 00:12:51.283 Suite: bdevio tests on: Nvme1n1 00:12:51.283 Test: blockdev write read block ...passed 00:12:51.283 Test: blockdev write zeroes read block ...passed 00:12:51.541 Test: blockdev write zeroes read no split ...passed 00:12:51.541 Test: blockdev write zeroes read split ...passed 00:12:51.541 Test: blockdev write zeroes read split partial ...passed 00:12:51.541 Test: blockdev reset ...[2024-07-26 11:01:10.919870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:51.541 [2024-07-26 11:01:10.919942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5a6d0 (9): Bad file descriptor 00:12:51.541 [2024-07-26 11:01:10.981408] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:51.541 passed 00:12:51.541 Test: blockdev write read 8 blocks ...passed 00:12:51.541 Test: blockdev write read size > 128k ...passed 00:12:51.541 Test: blockdev write read invalid size ...passed 00:12:51.799 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:51.799 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:51.799 Test: blockdev write read max offset ...passed 00:12:51.799 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:51.799 Test: blockdev writev readv 8 blocks ...passed 00:12:51.799 Test: blockdev writev readv 30 x 1block ...passed 00:12:51.799 Test: blockdev writev readv block ...passed 00:12:51.799 Test: blockdev writev readv size > 128k ...passed 00:12:51.799 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:51.799 Test: blockdev comparev and writev ...[2024-07-26 11:01:11.217272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.799 [2024-07-26 11:01:11.217298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:51.799 [2024-07-26 11:01:11.217312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.799 [2024-07-26 11:01:11.217320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:51.799 [2024-07-26 11:01:11.217926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.799 [2024-07-26 11:01:11.217937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:51.799 [2024-07-26 11:01:11.217948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.799 [2024-07-26 11:01:11.217956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:51.799 [2024-07-26 11:01:11.218506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.799 [2024-07-26 11:01:11.218516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:51.799 [2024-07-26 11:01:11.218528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.799 [2024-07-26 11:01:11.218534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:51.799 [2024-07-26 11:01:11.219062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.799 [2024-07-26 11:01:11.219073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:51.799 [2024-07-26 11:01:11.219085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:51.799 [2024-07-26 11:01:11.219096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:51.799 passed 00:12:52.058 Test: blockdev nvme passthru rw ...passed 00:12:52.058 Test: blockdev nvme passthru vendor specific ...[2024-07-26 11:01:11.303063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:52.058 [2024-07-26 11:01:11.303083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:52.058 [2024-07-26 11:01:11.303549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:52.058 [2024-07-26 11:01:11.303559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:52.058 [2024-07-26 11:01:11.303965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:52.058 [2024-07-26 11:01:11.303974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:52.058 [2024-07-26 11:01:11.304387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:52.058 [2024-07-26 11:01:11.304397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:52.058 passed 00:12:52.058 Test: blockdev nvme admin passthru ...passed 00:12:52.058 Test: blockdev copy ...passed 00:12:52.058 00:12:52.058 Run Summary: Type Total Ran Passed Failed Inactive 00:12:52.058 suites 1 1 n/a 0 0 00:12:52.058 tests 23 23 23 0 0 00:12:52.058 asserts 152 152 152 0 n/a 00:12:52.058 00:12:52.058 Elapsed time = 1.377 seconds 00:12:52.058 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.058 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.058 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:52.058 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.058 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:52.058 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:52.058 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.058 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:12:52.058 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.058 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:12:52.058 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.058 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.058 rmmod nvme_tcp 00:12:52.317 rmmod nvme_fabrics 00:12:52.317 rmmod nvme_keyring 00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
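The teardown that follows is the same nvmftestfini path already seen after the fio target test earlier in the log. Reduced to plain commands, with the pid and interface names as they appear in the trace, and with one assumption flagged inline (the body of _remove_spdk_ns is not shown here), it amounts to:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp              # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as echoed above
  modprobe -v -r nvme-fabrics
  kill 1366533                         # killprocess: signal the nvmf_tgt pid ...
  wait 1366533                         # ... then wait for it (works only from the shell that launched it)
  ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns, which is not traced
  ip -4 addr flush cvl_0_1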
00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1366533 ']' 00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1366533 00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1366533 ']' 00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1366533 00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1366533 00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1366533' 00:12:52.317 killing process with pid 1366533 00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1366533 00:12:52.317 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1366533 00:12:52.576 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.576 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:52.576 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:52.576 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.576 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.576 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.576 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.576 11:01:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.484 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:54.484 00:12:54.484 real 0m10.003s 00:12:54.485 user 0m12.659s 00:12:54.485 sys 0m4.616s 00:12:54.485 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.485 11:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:54.485 ************************************ 00:12:54.485 END TEST nvmf_bdevio 00:12:54.485 ************************************ 00:12:54.485 11:01:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:54.485 00:12:54.485 real 4m34.694s 00:12:54.485 user 10m32.594s 00:12:54.485 sys 1m31.078s 00:12:54.485 11:01:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.485 11:01:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:54.485 ************************************ 00:12:54.485 END TEST nvmf_target_core 00:12:54.485 ************************************ 00:12:54.485 11:01:13 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:54.485 11:01:13 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:54.485 11:01:13 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.485 11:01:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:54.744 ************************************ 00:12:54.744 START TEST nvmf_target_extra 00:12:54.744 ************************************ 00:12:54.744 11:01:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:54.744 * Looking for test storage... 00:12:54.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.744 11:01:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:54.745 ************************************ 00:12:54.745 START TEST nvmf_example 00:12:54.745 ************************************ 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:54.745 * Looking for test storage... 00:12:54.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.745 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.005 11:01:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:12:55.005 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:00.284 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:00.284 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.284 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:00.285 Found net devices under 0000:86:00.0: cvl_0_0 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.285 11:01:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:00.285 Found net devices under 0000:86:00.1: cvl_0_1 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:00.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:00.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:13:00.285 00:13:00.285 --- 10.0.0.2 ping statistics --- 00:13:00.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.285 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:00.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.427 ms 00:13:00.285 00:13:00.285 --- 10.0.0.1 ping statistics --- 00:13:00.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.285 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1370558 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1370558 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1370558 ']' 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:00.285 11:01:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:00.285 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:00.285 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.221 11:01:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:01.221 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:01.221 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.424 Initializing NVMe Controllers 00:13:13.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:13.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:13.424 Initialization complete. Launching workers. 00:13:13.424 ======================================================== 00:13:13.424 Latency(us) 00:13:13.424 Device Information : IOPS MiB/s Average min max 00:13:13.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13180.30 51.49 4858.56 715.41 18856.76 00:13:13.424 ======================================================== 00:13:13.424 Total : 13180.30 51.49 4858.56 715.41 18856.76 00:13:13.424 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:13.424 rmmod nvme_tcp 00:13:13.424 rmmod nvme_fabrics 00:13:13.424 rmmod nvme_keyring 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1370558 ']' 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1370558 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1370558 ']' 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1370558 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:13.424 11:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1370558 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1370558' 00:13:13.424 killing process with pid 1370558 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1370558 00:13:13.424 11:01:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1370558 00:13:13.424 nvmf threads initialize successfully 00:13:13.424 bdev subsystem init successfully 00:13:13.424 created a nvmf target service 00:13:13.424 create targets's poll groups done 00:13:13.424 all subsystems of target started 00:13:13.424 nvmf target is running 00:13:13.424 all subsystems of target stopped 00:13:13.424 destroy targets's poll groups done 00:13:13.424 destroyed the nvmf target service 00:13:13.424 bdev subsystem finish successfully 00:13:13.424 nvmf threads destroy successfully 00:13:13.424 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:13.424 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:13.424 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:13.424 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.424 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:13.424 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.424 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.424 11:01:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.683 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:13.683 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:13.683 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:13.683 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:13.943 00:13:13.943 real 0m19.033s 00:13:13.943 user 0m45.719s 00:13:13.943 sys 0m5.311s 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:13.943 ************************************ 00:13:13.943 END TEST nvmf_example 00:13:13.943 ************************************ 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.943 11:01:33 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.943 ************************************ 00:13:13.943 START TEST nvmf_filesystem 00:13:13.943 ************************************ 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:13.943 * Looking for test storage... 00:13:13.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:13.943 11:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:13.943 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:13.944 11:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:13:13.944 11:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:13.944 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:13.944 11:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:13.944 #define SPDK_CONFIG_H 00:13:13.944 #define SPDK_CONFIG_APPS 1 00:13:13.944 #define SPDK_CONFIG_ARCH native 00:13:13.944 #undef SPDK_CONFIG_ASAN 00:13:13.944 #undef SPDK_CONFIG_AVAHI 00:13:13.944 #undef SPDK_CONFIG_CET 00:13:13.944 #define SPDK_CONFIG_COVERAGE 1 00:13:13.944 #define SPDK_CONFIG_CROSS_PREFIX 00:13:13.944 #undef SPDK_CONFIG_CRYPTO 00:13:13.944 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:13.944 #undef SPDK_CONFIG_CUSTOMOCF 00:13:13.944 #undef SPDK_CONFIG_DAOS 00:13:13.944 #define SPDK_CONFIG_DAOS_DIR 00:13:13.944 #define SPDK_CONFIG_DEBUG 1 00:13:13.944 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:13.944 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:13.944 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:13.944 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:13.944 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:13.944 #undef SPDK_CONFIG_DPDK_UADK 00:13:13.944 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:13.944 #define SPDK_CONFIG_EXAMPLES 1 00:13:13.944 #undef SPDK_CONFIG_FC 00:13:13.944 #define SPDK_CONFIG_FC_PATH 00:13:13.944 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:13.944 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:13.944 #undef SPDK_CONFIG_FUSE 00:13:13.944 #undef SPDK_CONFIG_FUZZER 00:13:13.944 #define SPDK_CONFIG_FUZZER_LIB 00:13:13.944 #undef SPDK_CONFIG_GOLANG 00:13:13.944 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:13.944 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:13.944 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:13.944 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:13.944 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:13.944 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:13.944 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:13.944 #define SPDK_CONFIG_IDXD 1 00:13:13.944 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:13.944 #undef SPDK_CONFIG_IPSEC_MB 00:13:13.944 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:13.944 #define SPDK_CONFIG_ISAL 1 00:13:13.944 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:13.944 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:13.944 #define SPDK_CONFIG_LIBDIR 00:13:13.944 #undef SPDK_CONFIG_LTO 00:13:13.944 #define SPDK_CONFIG_MAX_LCORES 128 00:13:13.945 #define SPDK_CONFIG_NVME_CUSE 1 00:13:13.945 #undef SPDK_CONFIG_OCF 00:13:13.945 #define SPDK_CONFIG_OCF_PATH 00:13:13.945 #define SPDK_CONFIG_OPENSSL_PATH 00:13:13.945 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:13.945 #define SPDK_CONFIG_PGO_DIR 00:13:13.945 #undef SPDK_CONFIG_PGO_USE 00:13:13.945 #define SPDK_CONFIG_PREFIX /usr/local 00:13:13.945 #undef SPDK_CONFIG_RAID5F 00:13:13.945 #undef SPDK_CONFIG_RBD 00:13:13.945 #define SPDK_CONFIG_RDMA 1 00:13:13.945 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:13.945 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:13.945 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:13.945 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:13.945 #define SPDK_CONFIG_SHARED 1 00:13:13.945 #undef SPDK_CONFIG_SMA 00:13:13.945 #define SPDK_CONFIG_TESTS 1 00:13:13.945 #undef SPDK_CONFIG_TSAN 00:13:13.945 #define SPDK_CONFIG_UBLK 1 00:13:13.945 #define SPDK_CONFIG_UBSAN 1 00:13:13.945 #undef SPDK_CONFIG_UNIT_TESTS 00:13:13.945 #undef SPDK_CONFIG_URING 00:13:13.945 #define SPDK_CONFIG_URING_PATH 00:13:13.945 #undef SPDK_CONFIG_URING_ZNS 00:13:13.945 #undef SPDK_CONFIG_USDT 00:13:13.945 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:13.945 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:13.945 #define SPDK_CONFIG_VFIO_USER 1 00:13:13.945 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:13:13.945 #define SPDK_CONFIG_VHOST 1 00:13:13.945 #define SPDK_CONFIG_VIRTIO 1 00:13:13.945 #undef SPDK_CONFIG_VTUNE 00:13:13.945 #define SPDK_CONFIG_VTUNE_DIR 00:13:13.945 #define SPDK_CONFIG_WERROR 1 00:13:13.945 #define SPDK_CONFIG_WPDK_DIR 00:13:13.945 #undef SPDK_CONFIG_XNVME 00:13:13.945 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:13.945 11:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:13.945 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:13.946 11:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:13.946 11:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:13.946 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:13:13.947 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
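Alongside the ASAN and UBSAN option strings exported just above, the trace rebuilds a LeakSanitizer suppression file and points LSAN at it so the known libfuse3 allocation does not fail the run. Condensed into a stand-alone sketch (paths and values copied from the trace; the real helper may append further suppression sources where the cat step appears):

    # Recreate the LSAN suppression file and register the known-benign leak.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS="suppressions=$asan_suppression_file"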
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:14.206 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j96 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1372826 ]] 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1372826 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.OT54Oq 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.OT54Oq/tests/target /tmp/spdk.OT54Oq 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=950202368 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4334227456 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=185130655744 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=195974283264 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10843627520 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=97924960256 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=97987141632 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
fss["$mount"]=tmpfs 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=39171829760 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=39194857472 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23027712 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=97984126976 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=97987141632 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=3014656 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=19597422592 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=19597426688 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:13:14.207 * Looking for test storage... 
00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=185130655744 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=13058220032 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:14.207 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:14.208 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:19.476 
11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:19.476 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:19.476 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:19.476 Found net devices under 0000:86:00.0: cvl_0_0 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:19.476 Found net devices under 0000:86:00.1: cvl_0_1 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.476 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:19.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:19.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:13:19.477 00:13:19.477 --- 10.0.0.2 ping statistics --- 00:13:19.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.477 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:19.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.519 ms 00:13:19.477 00:13:19.477 --- 10.0.0.1 ping statistics --- 00:13:19.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.477 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:19.477 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:19.736 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:19.736 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:19.736 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.736 11:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:19.736 ************************************ 00:13:19.736 START TEST nvmf_filesystem_no_in_capsule 00:13:19.736 ************************************ 00:13:19.736 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:13:19.736 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:19.736 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:19.736 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:19.736 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:19.736 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:19.736 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1375876 00:13:19.736 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1375876 00:13:19.736 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:19.736 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1375876 ']' 00:13:19.736 
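nvmf_tcp_init splits the two ports into a small point-to-point test topology: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the default namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction proves connectivity before nvme-tcp is loaded. Condensed to the essential commands, with the interface and namespace names taken from this run:

    TGT_IF=cvl_0_0  IN_IF=cvl_0_1  NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$IN_IF"    # start from clean addressing
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                        # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$IN_IF"                     # initiator side, default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$IN_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$IN_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator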
11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.736 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.736 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.736 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.736 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:19.736 [2024-07-26 11:01:39.071865] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:19.736 [2024-07-26 11:01:39.071913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.736 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.736 [2024-07-26 11:01:39.130124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.737 [2024-07-26 11:01:39.212753] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.737 [2024-07-26 11:01:39.212790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.737 [2024-07-26 11:01:39.212797] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.737 [2024-07-26 11:01:39.212803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.737 [2024-07-26 11:01:39.212808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
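nvmfappstart launched build/bin/nvmf_tgt inside the target namespace (ip netns exec cvl_0_0_ns_spdk ... -i 0 -e 0xFFFF -m 0xF, PID 1375876) and waitforlisten blocks until the application answers on /var/tmp/spdk.sock; the filesystem test then configures the target over that socket with the rpc_cmd calls traced below. The same sequence written as direct scripts/rpc.py invocations, where the polling loop is only an illustration of what waitforlisten waits for:

    # Block until the target's RPC socket is up.
    until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0    # TCP transport, in-capsule data size 0
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1           # 512 MiB RAM bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420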
00:13:19.737 [2024-07-26 11:01:39.212853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.737 [2024-07-26 11:01:39.212948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.737 [2024-07-26 11:01:39.213032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.737 [2024-07-26 11:01:39.213034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.672 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.672 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:20.672 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:20.672 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:20.672 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:20.672 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.672 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:20.672 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:20.672 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.672 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:20.672 [2024-07-26 11:01:39.930321] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.672 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.672 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:20.672 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.672 11:01:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:20.672 Malloc1 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.672 11:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:20.672 [2024-07-26 11:01:40.083172] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.672 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:20.672 { 00:13:20.672 "name": "Malloc1", 00:13:20.672 "aliases": [ 00:13:20.672 "7da601d6-d688-4d53-bc2f-33f29f488b28" 00:13:20.672 ], 00:13:20.672 "product_name": "Malloc disk", 00:13:20.672 "block_size": 512, 00:13:20.672 "num_blocks": 1048576, 00:13:20.672 "uuid": "7da601d6-d688-4d53-bc2f-33f29f488b28", 00:13:20.672 "assigned_rate_limits": { 00:13:20.672 "rw_ios_per_sec": 0, 00:13:20.673 "rw_mbytes_per_sec": 0, 00:13:20.673 "r_mbytes_per_sec": 0, 00:13:20.673 "w_mbytes_per_sec": 0 00:13:20.673 }, 00:13:20.673 "claimed": true, 00:13:20.673 "claim_type": "exclusive_write", 00:13:20.673 "zoned": false, 00:13:20.673 "supported_io_types": { 00:13:20.673 "read": 
true, 00:13:20.673 "write": true, 00:13:20.673 "unmap": true, 00:13:20.673 "flush": true, 00:13:20.673 "reset": true, 00:13:20.673 "nvme_admin": false, 00:13:20.673 "nvme_io": false, 00:13:20.673 "nvme_io_md": false, 00:13:20.673 "write_zeroes": true, 00:13:20.673 "zcopy": true, 00:13:20.673 "get_zone_info": false, 00:13:20.673 "zone_management": false, 00:13:20.673 "zone_append": false, 00:13:20.673 "compare": false, 00:13:20.673 "compare_and_write": false, 00:13:20.673 "abort": true, 00:13:20.673 "seek_hole": false, 00:13:20.673 "seek_data": false, 00:13:20.673 "copy": true, 00:13:20.673 "nvme_iov_md": false 00:13:20.673 }, 00:13:20.673 "memory_domains": [ 00:13:20.673 { 00:13:20.673 "dma_device_id": "system", 00:13:20.673 "dma_device_type": 1 00:13:20.673 }, 00:13:20.673 { 00:13:20.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.673 "dma_device_type": 2 00:13:20.673 } 00:13:20.673 ], 00:13:20.673 "driver_specific": {} 00:13:20.673 } 00:13:20.673 ]' 00:13:20.673 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:20.673 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:20.673 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:20.931 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:20.931 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:20.931 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:20.931 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:20.931 11:01:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.894 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.894 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:21.894 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.894 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:21.894 11:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:24.422 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:24.422 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:24.422 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:24.422 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:24.422 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.422 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:24.422 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:24.423 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:24.423 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:24.423 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:24.423 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:24.423 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:24.423 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:24.423 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:24.423 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:24.423 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:24.423 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:24.423 11:01:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:24.989 11:01:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.922 ************************************ 00:13:25.922 START TEST filesystem_ext4 00:13:25.922 ************************************ 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
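Condensed from the xtrace above, the provisioning that every per-filesystem subtest relies on is the following (rpc_cmd is the harness's RPC wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock; all arguments are copied from the trace):

    # Target side: TCP transport with no in-capsule data, a 512 MiB malloc bdev,
    # a subsystem, a namespace, and a TCP listener.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: connect the kernel NVMe/TCP initiator, locate the device by its
    # SPDKISFASTANDAWESOME serial, and carve one GPT partition out of it.
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'   # -> nvme0n1
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe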
00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:25.922 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:25.922 mke2fs 1.46.5 (30-Dec-2021) 00:13:26.180 Discarding device blocks: 0/522240 done 00:13:26.180 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:26.180 Filesystem UUID: 28eb7c3f-f167-423c-98fe-e09ae08b793e 00:13:26.180 Superblock backups stored on blocks: 00:13:26.180 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:26.180 00:13:26.180 Allocating group tables: 0/64 done 00:13:26.180 Writing inode tables: 0/64 done 00:13:26.438 Creating journal (8192 blocks): done 00:13:27.261 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:13:27.261 00:13:27.261 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:27.261 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:28.196 
11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1375876 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:28.196 00:13:28.196 real 0m2.126s 00:13:28.196 user 0m0.021s 00:13:28.196 sys 0m0.049s 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:28.196 ************************************ 00:13:28.196 END TEST filesystem_ext4 00:13:28.196 ************************************ 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:28.196 ************************************ 00:13:28.196 START TEST filesystem_btrfs 00:13:28.196 ************************************ 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:28.196 11:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:28.196 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:28.454 btrfs-progs v6.6.2 00:13:28.454 See https://btrfs.readthedocs.io for more information. 00:13:28.454 00:13:28.454 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:28.454 NOTE: several default settings have changed in version 5.15, please make sure 00:13:28.454 this does not affect your deployments: 00:13:28.454 - DUP for metadata (-m dup) 00:13:28.454 - enabled no-holes (-O no-holes) 00:13:28.454 - enabled free-space-tree (-R free-space-tree) 00:13:28.454 00:13:28.454 Label: (null) 00:13:28.454 UUID: a9b83457-60d2-437f-958b-490e2ba949cf 00:13:28.454 Node size: 16384 00:13:28.454 Sector size: 4096 00:13:28.454 Filesystem size: 510.00MiB 00:13:28.454 Block group profiles: 00:13:28.454 Data: single 8.00MiB 00:13:28.454 Metadata: DUP 32.00MiB 00:13:28.454 System: DUP 8.00MiB 00:13:28.454 SSD detected: yes 00:13:28.454 Zoned device: no 00:13:28.454 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:28.454 Runtime features: free-space-tree 00:13:28.454 Checksum: crc32c 00:13:28.454 Number of devices: 1 00:13:28.454 Devices: 00:13:28.454 ID SIZE PATH 00:13:28.454 1 510.00MiB /dev/nvme0n1p1 00:13:28.454 00:13:28.454 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:28.454 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:29.020 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:29.020 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:29.020 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:29.020 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:29.020 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:29.020 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:29.020 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1375876 00:13:29.020 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:29.020 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:29.020 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:13:29.020 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:29.020 00:13:29.020 real 0m0.947s 00:13:29.020 user 0m0.021s 00:13:29.020 sys 0m0.061s 00:13:29.020 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.020 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:29.020 ************************************ 00:13:29.020 END TEST filesystem_btrfs 00:13:29.020 ************************************ 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:29.279 ************************************ 00:13:29.279 START TEST filesystem_xfs 00:13:29.279 ************************************ 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:29.279 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:29.279 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:29.279 = sectsz=512 attr=2, projid32bit=1 00:13:29.279 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:29.279 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:13:29.279 data = bsize=4096 blocks=130560, imaxpct=25 00:13:29.279 = sunit=0 swidth=0 blks 00:13:29.279 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:29.279 log =internal log bsize=4096 blocks=16384, version=2 00:13:29.279 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:29.279 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:30.213 Discarding blocks...Done. 00:13:30.213 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:30.213 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1375876 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:32.115 00:13:32.115 real 0m2.799s 00:13:32.115 user 0m0.027s 00:13:32.115 sys 0m0.045s 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:32.115 ************************************ 00:13:32.115 END TEST filesystem_xfs 00:13:32.115 ************************************ 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.115 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.373 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.373 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:32.373 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1375876 00:13:32.373 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1375876 ']' 00:13:32.373 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1375876 00:13:32.373 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:32.373 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.373 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1375876 00:13:32.373 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:32.373 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:32.373 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1375876' 00:13:32.373 killing process with pid 1375876 00:13:32.373 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1375876 00:13:32.373 11:01:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1375876 00:13:32.632 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:32.632 00:13:32.632 real 0m12.989s 00:13:32.632 user 0m51.025s 00:13:32.632 sys 0m1.083s 00:13:32.632 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.632 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.632 ************************************ 00:13:32.632 END TEST nvmf_filesystem_no_in_capsule 00:13:32.632 ************************************ 00:13:32.632 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:32.632 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:32.632 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.632 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:32.632 ************************************ 00:13:32.632 START TEST nvmf_filesystem_in_capsule 00:13:32.632 ************************************ 00:13:32.632 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:13:32.632 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:32.632 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:32.632 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:32.632 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:32.632 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.632 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1378291 00:13:32.633 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1378291 00:13:32.633 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:32.633 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1378291 ']' 00:13:32.633 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.633 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.633 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:32.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.633 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.633 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.891 [2024-07-26 11:01:52.130903] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:32.891 [2024-07-26 11:01:52.130944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.891 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.891 [2024-07-26 11:01:52.186915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:32.891 [2024-07-26 11:01:52.267537] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.891 [2024-07-26 11:01:52.267575] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.891 [2024-07-26 11:01:52.267583] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.891 [2024-07-26 11:01:52.267589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.891 [2024-07-26 11:01:52.267594] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.891 [2024-07-26 11:01:52.267629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.891 [2024-07-26 11:01:52.267725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.891 [2024-07-26 11:01:52.267815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.891 [2024-07-26 11:01:52.267817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.456 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.456 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:33.456 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:33.456 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:33.456 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.714 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.714 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:33.714 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:33.714 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.714 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
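Between the two passes the first target is torn down and the whole sequence is repeated with in-capsule data enabled; apart from the transport's -c argument the second pass runs the same nvmf_filesystem_part flow. Condensed from the trace (killprocess is the harness helper that stops the pid it is given):

    # Teardown of the first pass (no_in_capsule), as traced above.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    killprocess 1375876                                       # first nvmf_tgt instance

    # The only functional difference between the two passes:
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # pass 1: no in-capsule data
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # pass 2: up to 4096 bytes in-capsule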
00:13:33.714 [2024-07-26 11:01:52.986393] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.714 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.714 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:33.714 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.715 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.715 Malloc1 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.715 [2024-07-26 11:01:53.139816] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:33.715 11:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:33.715 { 00:13:33.715 "name": "Malloc1", 00:13:33.715 "aliases": [ 00:13:33.715 "18935e06-d54d-4b86-9d29-90349bb10e08" 00:13:33.715 ], 00:13:33.715 "product_name": "Malloc disk", 00:13:33.715 "block_size": 512, 00:13:33.715 "num_blocks": 1048576, 00:13:33.715 "uuid": "18935e06-d54d-4b86-9d29-90349bb10e08", 00:13:33.715 "assigned_rate_limits": { 00:13:33.715 "rw_ios_per_sec": 0, 00:13:33.715 "rw_mbytes_per_sec": 0, 00:13:33.715 "r_mbytes_per_sec": 0, 00:13:33.715 "w_mbytes_per_sec": 0 00:13:33.715 }, 00:13:33.715 "claimed": true, 00:13:33.715 "claim_type": "exclusive_write", 00:13:33.715 "zoned": false, 00:13:33.715 "supported_io_types": { 00:13:33.715 "read": true, 00:13:33.715 "write": true, 00:13:33.715 "unmap": true, 00:13:33.715 "flush": true, 00:13:33.715 "reset": true, 00:13:33.715 "nvme_admin": false, 00:13:33.715 "nvme_io": false, 00:13:33.715 "nvme_io_md": false, 00:13:33.715 "write_zeroes": true, 00:13:33.715 "zcopy": true, 00:13:33.715 "get_zone_info": false, 00:13:33.715 "zone_management": false, 00:13:33.715 "zone_append": false, 00:13:33.715 "compare": false, 00:13:33.715 "compare_and_write": false, 00:13:33.715 "abort": true, 00:13:33.715 "seek_hole": false, 00:13:33.715 "seek_data": false, 00:13:33.715 "copy": true, 00:13:33.715 "nvme_iov_md": false 00:13:33.715 }, 00:13:33.715 "memory_domains": [ 00:13:33.715 { 00:13:33.715 "dma_device_id": "system", 00:13:33.715 "dma_device_type": 1 00:13:33.715 }, 00:13:33.715 { 00:13:33.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.715 "dma_device_type": 2 00:13:33.715 } 00:13:33.715 ], 00:13:33.715 "driver_specific": {} 00:13:33.715 } 00:13:33.715 ]' 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:33.715 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:33.973 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:33.973 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:33.973 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:33.973 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:33.973 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:33.973 11:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.346 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:35.346 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:35.346 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.346 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:35.346 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:37.246 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:37.246 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:37.247 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:37.505 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:38.441 ************************************ 00:13:38.441 START TEST filesystem_in_capsule_ext4 00:13:38.441 ************************************ 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:38.441 11:01:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:38.441 mke2fs 1.46.5 (30-Dec-2021) 00:13:38.441 Discarding device blocks: 0/522240 done 00:13:38.441 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:38.441 Filesystem UUID: 4adac35f-1b43-43f0-85e5-75c944dff545 00:13:38.441 Superblock backups stored on blocks: 00:13:38.441 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:13:38.441 00:13:38.441 Allocating group tables: 0/64 done 00:13:38.441 Writing inode tables: 0/64 done 00:13:38.700 Creating journal (8192 blocks): done 00:13:39.635 Writing superblocks and filesystem accounting information: 0/64 done 00:13:39.635 00:13:39.635 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:39.635 11:01:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1378291 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:39.894 00:13:39.894 real 0m1.427s 00:13:39.894 user 0m0.020s 00:13:39.894 sys 0m0.047s 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:39.894 ************************************ 00:13:39.894 END TEST filesystem_in_capsule_ext4 00:13:39.894 ************************************ 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.894 11:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:39.894 ************************************ 00:13:39.894 START TEST filesystem_in_capsule_btrfs 00:13:39.894 ************************************ 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:39.894 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:40.461 btrfs-progs v6.6.2 00:13:40.461 See https://btrfs.readthedocs.io for more information. 00:13:40.461 00:13:40.461 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
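Condensed, each filesystem_in_capsule_* test traced here runs the same create-and-verify cycle against the partition on the TCP-attached namespace. A minimal sketch, using the device, mount point, and target PID values shown in this log; the real target/filesystem.sh helpers wrap these calls in retries and error traps:

mkfs.ext4 -F /dev/nvme0n1p1                 # btrfs and xfs passes use mkfs.btrfs -f / mkfs.xfs -f
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync                                        # flush the write through the NVMe/TCP path
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 1378291                             # the nvmf_tgt process from this run must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still attached
lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still present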
00:13:40.461 NOTE: several default settings have changed in version 5.15, please make sure 00:13:40.461 this does not affect your deployments: 00:13:40.461 - DUP for metadata (-m dup) 00:13:40.461 - enabled no-holes (-O no-holes) 00:13:40.461 - enabled free-space-tree (-R free-space-tree) 00:13:40.461 00:13:40.461 Label: (null) 00:13:40.461 UUID: fcb7b857-e620-43ca-9344-80d0f794a88b 00:13:40.461 Node size: 16384 00:13:40.461 Sector size: 4096 00:13:40.461 Filesystem size: 510.00MiB 00:13:40.461 Block group profiles: 00:13:40.461 Data: single 8.00MiB 00:13:40.461 Metadata: DUP 32.00MiB 00:13:40.461 System: DUP 8.00MiB 00:13:40.461 SSD detected: yes 00:13:40.461 Zoned device: no 00:13:40.461 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:40.461 Runtime features: free-space-tree 00:13:40.461 Checksum: crc32c 00:13:40.461 Number of devices: 1 00:13:40.461 Devices: 00:13:40.461 ID SIZE PATH 00:13:40.461 1 510.00MiB /dev/nvme0n1p1 00:13:40.461 00:13:40.461 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:40.461 11:01:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1378291 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:41.430 00:13:41.430 real 0m1.424s 00:13:41.430 user 0m0.030s 00:13:41.430 sys 0m0.052s 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:41.430 11:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:41.430 ************************************ 00:13:41.430 END TEST filesystem_in_capsule_btrfs 00:13:41.430 ************************************ 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:41.430 ************************************ 00:13:41.430 START TEST filesystem_in_capsule_xfs 00:13:41.430 ************************************ 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:41.430 11:02:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:41.430 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:41.430 = sectsz=512 attr=2, projid32bit=1 00:13:41.430 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:41.430 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:41.430 data = bsize=4096 blocks=130560, imaxpct=25 00:13:41.431 = sunit=0 swidth=0 blks 00:13:41.431 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:41.431 log =internal log bsize=4096 blocks=16384, version=2 00:13:41.431 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:41.431 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:13:42.473 Discarding blocks...Done. 00:13:42.473 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:42.473 11:02:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:44.367 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:44.626 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:44.626 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:44.626 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:44.626 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:44.626 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:44.626 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1378291 00:13:44.626 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:44.626 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:44.626 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:44.626 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:44.626 00:13:44.626 real 0m3.139s 00:13:44.626 user 0m0.023s 00:13:44.626 sys 0m0.050s 00:13:44.626 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.626 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:44.626 ************************************ 00:13:44.626 END TEST filesystem_in_capsule_xfs 00:13:44.626 ************************************ 00:13:44.626 11:02:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:44.626 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:44.626 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.626 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.626 11:02:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:44.626 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:44.626 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.626 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:44.626 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.626 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:44.626 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.626 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.626 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.885 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.885 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:44.885 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1378291 00:13:44.885 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1378291 ']' 00:13:44.885 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1378291 00:13:44.885 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:44.885 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.885 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1378291 00:13:44.885 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:44.885 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:44.885 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1378291' 00:13:44.885 killing process with pid 1378291 00:13:44.885 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1378291 00:13:44.885 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1378291 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:45.144 00:13:45.144 real 0m12.452s 00:13:45.144 user 0m48.904s 
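The teardown traced above reduces to roughly the following sequence. A sketch only: the killprocess and nvmftestfini helpers add more error handling than shown, rpc_cmd in these traces resolves to scripts/rpc.py in the SPDK checkout, and the partition, subsystem NQN, and PID are the values from this run:

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop the SPDK_TEST partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # detach the host-side controller
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 1378291                                       # stop the nvmf_tgt reactor process
modprobe -v -r nvme-tcp                            # the rmmod lines above are this command's -v output
modprobe -v -r nvme-fabrics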
00:13:45.144 sys 0m1.052s 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.144 ************************************ 00:13:45.144 END TEST nvmf_filesystem_in_capsule 00:13:45.144 ************************************ 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:45.144 rmmod nvme_tcp 00:13:45.144 rmmod nvme_fabrics 00:13:45.144 rmmod nvme_keyring 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.144 11:02:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:47.679 00:13:47.679 real 0m33.418s 00:13:47.679 user 1m41.652s 00:13:47.679 sys 0m6.411s 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:47.679 ************************************ 00:13:47.679 END TEST nvmf_filesystem 00:13:47.679 ************************************ 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:47.679 ************************************ 00:13:47.679 START TEST nvmf_target_discovery 00:13:47.679 ************************************ 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:47.679 * Looking for test storage... 00:13:47.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.679 11:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.679 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:47.680 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.680 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:47.680 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:47.680 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:13:47.680 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:13:52.968 11:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:52.968 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:52.969 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:52.969 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:52.969 Found net devices under 0000:86:00.0: cvl_0_0 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:52.969 11:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:52.969 Found net devices under 0000:86:00.1: cvl_0_1 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:52.969 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.969 11:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:52.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:13:52.969 00:13:52.969 --- 10.0.0.2 ping statistics --- 00:13:52.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.969 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.422 ms 00:13:52.969 00:13:52.969 --- 10.0.0.1 ping statistics --- 00:13:52.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.969 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1383859 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1383859 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1383859 ']' 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:52.969 11:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:52.969 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:52.969 [2024-07-26 11:02:12.130883] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:52.969 [2024-07-26 11:02:12.130927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.969 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.969 [2024-07-26 11:02:12.186353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:52.969 [2024-07-26 11:02:12.267344] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.969 [2024-07-26 11:02:12.267380] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.969 [2024-07-26 11:02:12.267387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.969 [2024-07-26 11:02:12.267394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.969 [2024-07-26 11:02:12.267399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.970 [2024-07-26 11:02:12.267437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.970 [2024-07-26 11:02:12.267532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.970 [2024-07-26 11:02:12.267626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:52.970 [2024-07-26 11:02:12.267627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.539 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:53.539 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:13:53.539 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.539 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:53.539 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.539 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.539 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.539 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.539 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.539 [2024-07-26 11:02:12.990427] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.539 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.539 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.539 Null1 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.539 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.799 [2024-07-26 11:02:13.035923] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.799 Null2 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.799 11:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.799 Null3 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.799 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.800 Null4 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.800 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:54.066 00:13:54.066 Discovery Log Number of Records 6, Generation counter 6 00:13:54.066 =====Discovery Log Entry 0====== 00:13:54.066 trtype: tcp 00:13:54.066 adrfam: ipv4 00:13:54.066 subtype: current discovery subsystem 00:13:54.066 treq: not required 00:13:54.066 portid: 0 00:13:54.066 trsvcid: 4420 00:13:54.066 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:54.066 traddr: 10.0.0.2 00:13:54.066 eflags: explicit discovery connections, duplicate discovery information 00:13:54.066 sectype: none 00:13:54.066 =====Discovery Log Entry 1====== 00:13:54.066 trtype: tcp 00:13:54.066 adrfam: ipv4 00:13:54.066 subtype: nvme subsystem 00:13:54.066 treq: not required 00:13:54.066 portid: 0 00:13:54.066 trsvcid: 4420 00:13:54.066 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:54.066 traddr: 10.0.0.2 00:13:54.066 eflags: none 00:13:54.066 sectype: none 00:13:54.066 =====Discovery Log Entry 2====== 00:13:54.066 trtype: tcp 00:13:54.066 adrfam: ipv4 00:13:54.066 subtype: nvme subsystem 00:13:54.066 treq: not required 00:13:54.066 portid: 0 00:13:54.066 trsvcid: 4420 00:13:54.066 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:54.066 traddr: 10.0.0.2 00:13:54.066 eflags: none 00:13:54.066 sectype: none 00:13:54.066 =====Discovery Log Entry 3====== 00:13:54.066 trtype: tcp 00:13:54.066 adrfam: ipv4 00:13:54.066 subtype: nvme subsystem 00:13:54.066 treq: not required 00:13:54.066 portid: 0 00:13:54.066 trsvcid: 4420 00:13:54.066 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:54.066 traddr: 10.0.0.2 00:13:54.066 eflags: none 00:13:54.066 sectype: none 00:13:54.066 =====Discovery Log Entry 4====== 00:13:54.066 trtype: tcp 00:13:54.066 adrfam: ipv4 00:13:54.066 subtype: nvme subsystem 00:13:54.066 treq: not required 00:13:54.066 portid: 0 00:13:54.066 trsvcid: 4420 00:13:54.066 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:54.066 traddr: 10.0.0.2 00:13:54.066 eflags: none 00:13:54.066 sectype: none 00:13:54.066 =====Discovery Log Entry 5====== 00:13:54.066 trtype: tcp 00:13:54.066 adrfam: ipv4 00:13:54.066 subtype: discovery subsystem referral 00:13:54.066 treq: not required 00:13:54.066 portid: 0 00:13:54.066 trsvcid: 4430 00:13:54.066 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:54.066 traddr: 10.0.0.2 00:13:54.066 eflags: none 00:13:54.066 sectype: none 00:13:54.066 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:54.066 Perform nvmf subsystem discovery via RPC 00:13:54.066 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:54.066 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.066 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:54.066 [ 00:13:54.066 { 00:13:54.066 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:54.066 "subtype": "Discovery", 00:13:54.066 "listen_addresses": [ 00:13:54.066 { 00:13:54.066 "trtype": "TCP", 00:13:54.066 "adrfam": "IPv4", 00:13:54.066 "traddr": "10.0.0.2", 00:13:54.066 "trsvcid": "4420" 00:13:54.066 } 00:13:54.066 ], 00:13:54.066 "allow_any_host": true, 00:13:54.066 "hosts": [] 00:13:54.066 }, 00:13:54.066 { 00:13:54.066 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.066 "subtype": "NVMe", 00:13:54.066 "listen_addresses": [ 00:13:54.066 { 00:13:54.066 "trtype": "TCP", 00:13:54.066 "adrfam": "IPv4", 00:13:54.066 
"traddr": "10.0.0.2", 00:13:54.066 "trsvcid": "4420" 00:13:54.066 } 00:13:54.066 ], 00:13:54.066 "allow_any_host": true, 00:13:54.066 "hosts": [], 00:13:54.066 "serial_number": "SPDK00000000000001", 00:13:54.066 "model_number": "SPDK bdev Controller", 00:13:54.066 "max_namespaces": 32, 00:13:54.066 "min_cntlid": 1, 00:13:54.066 "max_cntlid": 65519, 00:13:54.066 "namespaces": [ 00:13:54.066 { 00:13:54.066 "nsid": 1, 00:13:54.066 "bdev_name": "Null1", 00:13:54.066 "name": "Null1", 00:13:54.066 "nguid": "1FB6543BBB03444F8990F169D2FC7E6A", 00:13:54.066 "uuid": "1fb6543b-bb03-444f-8990-f169d2fc7e6a" 00:13:54.066 } 00:13:54.066 ] 00:13:54.066 }, 00:13:54.066 { 00:13:54.066 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:54.066 "subtype": "NVMe", 00:13:54.066 "listen_addresses": [ 00:13:54.066 { 00:13:54.066 "trtype": "TCP", 00:13:54.066 "adrfam": "IPv4", 00:13:54.066 "traddr": "10.0.0.2", 00:13:54.066 "trsvcid": "4420" 00:13:54.066 } 00:13:54.066 ], 00:13:54.066 "allow_any_host": true, 00:13:54.066 "hosts": [], 00:13:54.066 "serial_number": "SPDK00000000000002", 00:13:54.066 "model_number": "SPDK bdev Controller", 00:13:54.066 "max_namespaces": 32, 00:13:54.066 "min_cntlid": 1, 00:13:54.066 "max_cntlid": 65519, 00:13:54.066 "namespaces": [ 00:13:54.066 { 00:13:54.066 "nsid": 1, 00:13:54.066 "bdev_name": "Null2", 00:13:54.066 "name": "Null2", 00:13:54.066 "nguid": "C0E6F494D14C49C2AADDBC791C4375CD", 00:13:54.066 "uuid": "c0e6f494-d14c-49c2-aadd-bc791c4375cd" 00:13:54.066 } 00:13:54.066 ] 00:13:54.066 }, 00:13:54.066 { 00:13:54.066 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:54.066 "subtype": "NVMe", 00:13:54.066 "listen_addresses": [ 00:13:54.066 { 00:13:54.066 "trtype": "TCP", 00:13:54.066 "adrfam": "IPv4", 00:13:54.066 "traddr": "10.0.0.2", 00:13:54.066 "trsvcid": "4420" 00:13:54.066 } 00:13:54.066 ], 00:13:54.066 "allow_any_host": true, 00:13:54.066 "hosts": [], 00:13:54.066 "serial_number": "SPDK00000000000003", 00:13:54.066 "model_number": "SPDK bdev Controller", 00:13:54.066 "max_namespaces": 32, 00:13:54.066 "min_cntlid": 1, 00:13:54.066 "max_cntlid": 65519, 00:13:54.066 "namespaces": [ 00:13:54.066 { 00:13:54.066 "nsid": 1, 00:13:54.066 "bdev_name": "Null3", 00:13:54.066 "name": "Null3", 00:13:54.066 "nguid": "7E2002CDC356437DB44E6B4D04DFCBE3", 00:13:54.066 "uuid": "7e2002cd-c356-437d-b44e-6b4d04dfcbe3" 00:13:54.066 } 00:13:54.066 ] 00:13:54.066 }, 00:13:54.066 { 00:13:54.066 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:54.066 "subtype": "NVMe", 00:13:54.066 "listen_addresses": [ 00:13:54.066 { 00:13:54.066 "trtype": "TCP", 00:13:54.066 "adrfam": "IPv4", 00:13:54.066 "traddr": "10.0.0.2", 00:13:54.066 "trsvcid": "4420" 00:13:54.066 } 00:13:54.066 ], 00:13:54.066 "allow_any_host": true, 00:13:54.066 "hosts": [], 00:13:54.066 "serial_number": "SPDK00000000000004", 00:13:54.066 "model_number": "SPDK bdev Controller", 00:13:54.066 "max_namespaces": 32, 00:13:54.066 "min_cntlid": 1, 00:13:54.066 "max_cntlid": 65519, 00:13:54.066 "namespaces": [ 00:13:54.066 { 00:13:54.066 "nsid": 1, 00:13:54.066 "bdev_name": "Null4", 00:13:54.066 "name": "Null4", 00:13:54.066 "nguid": "BAB9986B6F6044FE8D1084C5D3A8F957", 00:13:54.066 "uuid": "bab9986b-6f60-44fe-8d10-84c5d3a8f957" 00:13:54.066 } 00:13:54.066 ] 00:13:54.066 } 00:13:54.066 ] 00:13:54.066 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.066 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:54.066 11:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:54.066 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.066 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.066 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:54.066 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.066 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:54.066 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:54.067 11:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:54.067 rmmod nvme_tcp 00:13:54.067 rmmod nvme_fabrics 00:13:54.067 rmmod nvme_keyring 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:54.067 11:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1383859 ']' 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1383859 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1383859 ']' 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1383859 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:54.067 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1383859 00:13:54.327 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:54.327 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:54.327 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1383859' 00:13:54.327 killing process with pid 1383859 00:13:54.327 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1383859 00:13:54.327 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1383859 00:13:54.327 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.327 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:54.327 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:54.327 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.327 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:54.327 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.327 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.327 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.868 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:56.868 00:13:56.868 real 0m9.089s 00:13:56.868 user 0m7.564s 00:13:56.868 sys 0m4.304s 00:13:56.868 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.868 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.868 ************************************ 00:13:56.868 END TEST nvmf_target_discovery 00:13:56.868 ************************************ 00:13:56.868 11:02:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:56.868 11:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:56.868 11:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.868 11:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:56.868 ************************************ 00:13:56.868 START TEST nvmf_referrals 00:13:56.868 ************************************ 00:13:56.868 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:56.868 * Looking for test storage... 00:13:56.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.868 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.868 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:56.868 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.868 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.868 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.868 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.869 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.869 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.869 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.869 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.869 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.869 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.869 11:02:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.869 11:02:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:13:56.869 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:02.154 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.154 11:02:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:02.154 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:02.154 Found net devices under 0000:86:00.0: cvl_0_0 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.154 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 
00:14:02.155 Found net devices under 0000:86:00.1: cvl_0_1 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:02.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:14:02.155 00:14:02.155 --- 10.0.0.2 ping statistics --- 00:14:02.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.155 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:14:02.155 00:14:02.155 --- 10.0.0.1 ping statistics --- 00:14:02.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.155 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1387636 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1387636 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1387636 ']' 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
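At this point nvmftestinit has finished wiring the two ports of the E810 NIC into a point-to-point TCP path: the target-side port (cvl_0_0) was moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2, the initiator kept cvl_0_1 at 10.0.0.1, port 4420 was opened in iptables, and both directions were smoke-tested with ping before nvmf_tgt was launched inside the namespace. A condensed sketch of that plumbing, using the interface names and SPDK path from this particular run:

  # sketch of the nvmf_tcp_init + nvmfappstart steps logged above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &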
00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.155 11:02:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:02.155 [2024-07-26 11:02:21.584327] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:02.155 [2024-07-26 11:02:21.584373] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.155 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.155 [2024-07-26 11:02:21.641659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.415 [2024-07-26 11:02:21.715694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.415 [2024-07-26 11:02:21.715736] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.415 [2024-07-26 11:02:21.715746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.415 [2024-07-26 11:02:21.715752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.415 [2024-07-26 11:02:21.715756] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.415 [2024-07-26 11:02:21.715844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.415 [2024-07-26 11:02:21.715944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.415 [2024-07-26 11:02:21.716019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.415 [2024-07-26 11:02:21.716021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:02.984 [2024-07-26 11:02:22.441426] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.984 11:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:02.984 [2024-07-26 11:02:22.454750] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.984 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:03.244 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.245 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:03.245 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.504 11:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:03.504 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:03.504 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:03.504 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:03.504 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:03.504 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:03.504 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
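The pattern being exercised here is symmetric verification: every nvmf_discovery_add_referral / nvmf_discovery_remove_referral RPC is cross-checked both through nvmf_discovery_get_referrals and through a real nvme discover against the 8009 discovery listener. Outside the harness the same cycle looks roughly like the sketch below; rpc_cmd in the test is a wrapper around scripts/rpc.py, the default /var/tmp/spdk.sock socket and the addresses from this run are assumed, and the --hostnqn/--hostid options used in the log are omitted for brevity:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  $rpc nvmf_discovery_get_referrals | jq length                       # expect 3
  # the same three addresses should come back as discovery-log records
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  # a referral may also carry an explicit subsystem NQN:
  $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1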
00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:03.505 11:02:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:03.765 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:03.765 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:03.765 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:03.765 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:03.765 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:03.765 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:03.765 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:03.765 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:03.765 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:03.765 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:03.765 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:03.765 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:03.765 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:04.030 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:04.030 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.031 11:02:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
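The get_discovery_entries checks running here boil down to filtering the JSON discovery log page by record subtype and extracting the subsystem NQN; a standalone sketch using the same discovery address as this run (host NQN/ID flags again omitted):
  # Entries for NVMe subsystems vs. discovery-subsystem referrals, by subnqn.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq '.records[] | select(.subtype == "nvme subsystem")' | jq -r .subnqn
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq '.records[] | select(.subtype == "discovery subsystem referral")' | jq -r .subnqn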
00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:04.031 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:04.291 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
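The nvmftestfini teardown in progress here unwinds the fixture; a rough equivalent of the steps visible in the log (killprocess issues kill/wait on the stored target pid, $nvmfpid in nvmf/common.sh; the namespace removal is assumed to be an ip netns delete, which the xtrace does not show verbatim):
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"    # nvmf_tgt started during setup
  ip netns delete cvl_0_0_ns_spdk       # assumed form of _remove_spdk_ns
  ip -4 addr flush cvl_0_1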
00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:04.552 rmmod nvme_tcp 00:14:04.552 rmmod nvme_fabrics 00:14:04.552 rmmod nvme_keyring 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1387636 ']' 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1387636 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1387636 ']' 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1387636 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1387636 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1387636' 00:14:04.552 killing process with pid 1387636 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1387636 00:14:04.552 11:02:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1387636 00:14:04.812 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:04.812 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:04.812 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:04.812 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.812 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:04.812 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.812 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.812 11:02:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.787 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:06.787 00:14:06.787 real 0m10.233s 00:14:06.787 user 0m11.804s 00:14:06.787 sys 0m4.614s 00:14:06.787 11:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.787 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.787 ************************************ 00:14:06.787 END TEST nvmf_referrals 00:14:06.787 ************************************ 00:14:06.787 11:02:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:06.787 11:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:06.787 11:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:06.787 11:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.787 ************************************ 00:14:06.787 START TEST nvmf_connect_disconnect 00:14:06.787 ************************************ 00:14:06.787 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:07.047 * Looking for test storage... 00:14:07.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.048 11:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.048 11:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.328 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:12.329 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:12.329 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.329 11:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:12.329 Found net devices under 0000:86:00.0: cvl_0_0 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:12.329 Found net devices under 0000:86:00.1: cvl_0_1 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:12.329 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:12.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:14:12.589 00:14:12.589 --- 10.0.0.2 ping statistics --- 00:14:12.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.589 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:12.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:14:12.589 00:14:12.589 --- 10.0.0.1 ping statistics --- 00:14:12.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.589 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:12.589 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.590 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:12.590 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:12.590 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1391497 00:14:12.590 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1391497 00:14:12.590 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:12.590 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1391497 ']' 00:14:12.590 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.590 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.590 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.590 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.590 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:12.590 [2024-07-26 11:02:31.965185] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
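What follows in the log is the target bring-up inside the cvl_0_0_ns_spdk namespace and five connect/disconnect iterations; condensed into a sketch with paths shortened, where the nvme connect/disconnect pair is an assumed equivalent of the loop body (which runs with xtrace off and only prints the "disconnected 1 controller(s)" lines seen below):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512                      # creates Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 5); do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # "disconnected 1 controller(s)"
  done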
00:14:12.590 [2024-07-26 11:02:31.965227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.590 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.590 [2024-07-26 11:02:32.024202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.850 [2024-07-26 11:02:32.101433] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.850 [2024-07-26 11:02:32.101476] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.850 [2024-07-26 11:02:32.101483] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.850 [2024-07-26 11:02:32.101489] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.850 [2024-07-26 11:02:32.101494] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.850 [2024-07-26 11:02:32.101534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.850 [2024-07-26 11:02:32.101632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.850 [2024-07-26 11:02:32.101695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.850 [2024-07-26 11:02:32.101696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.420 [2024-07-26 11:02:32.814389] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.420 11:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.420 [2024-07-26 11:02:32.866353] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:13.420 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:16.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:29.974 11:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:29.974 rmmod nvme_tcp 00:14:29.974 rmmod nvme_fabrics 00:14:29.974 rmmod nvme_keyring 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1391497 ']' 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1391497 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1391497 ']' 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1391497 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1391497 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1391497' 00:14:29.974 killing process with pid 1391497 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1391497 00:14:29.974 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1391497 00:14:30.234 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.234 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:30.234 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.234 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.234 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.234 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.234 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.234 11:02:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.146 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:32.146 00:14:32.146 real 0m25.349s 00:14:32.146 user 1m10.738s 00:14:32.146 sys 0m5.226s 00:14:32.146 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.146 11:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:32.146 ************************************ 00:14:32.146 END TEST nvmf_connect_disconnect 00:14:32.146 ************************************ 00:14:32.146 11:02:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:32.146 11:02:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:32.146 11:02:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.146 11:02:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:32.146 ************************************ 00:14:32.146 START TEST nvmf_multitarget 00:14:32.146 ************************************ 00:14:32.146 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:32.406 * Looking for test storage... 00:14:32.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.406 11:02:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:32.406 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:14:32.407 11:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:37.755 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.755 11:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:37.755 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:37.755 Found net devices under 0000:86:00.0: cvl_0_0 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:37.755 Found net devices under 0000:86:00.1: cvl_0_1 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:37.755 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:38.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:38.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:14:38.016 00:14:38.016 --- 10.0.0.2 ping statistics --- 00:14:38.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.016 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:14:38.016 00:14:38.016 --- 10.0.0.1 ping statistics --- 00:14:38.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.016 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1398073 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1398073 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1398073 ']' 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
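A condensed sketch of the nvmf_tcp_init sequence the trace above just walked through, using the interface names, namespace, addresses, and port from this run; it is an illustration of the order of operations, not the nvmf/common.sh code itself:

# Sketch of the TCP test-bed setup seen in the trace (run as root).
TARGET_IF=cvl_0_0            # NVMF_TARGET_INTERFACE in this run
INITIATOR_IF=cvl_0_1         # NVMF_INITIATOR_INTERFACE in this run
NS=cvl_0_0_ns_spdk           # NVMF_TARGET_NAMESPACE

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"                                           # isolate the target-side port
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                  # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF" # target IP
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                           # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                       # target -> initiator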
00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.016 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:38.016 [2024-07-26 11:02:57.396008] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:38.016 [2024-07-26 11:02:57.396057] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.016 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.016 [2024-07-26 11:02:57.454239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.277 [2024-07-26 11:02:57.528557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.277 [2024-07-26 11:02:57.528598] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.277 [2024-07-26 11:02:57.528605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.277 [2024-07-26 11:02:57.528610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.277 [2024-07-26 11:02:57.528615] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.277 [2024-07-26 11:02:57.528663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.277 [2024-07-26 11:02:57.528761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.277 [2024-07-26 11:02:57.528818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.277 [2024-07-26 11:02:57.528819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.848 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.848 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:38.848 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:38.848 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:38.848 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:38.848 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.848 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:38.848 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:38.848 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:38.848 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:38.848 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:39.108 "nvmf_tgt_1" 00:14:39.108 11:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:39.108 "nvmf_tgt_2" 00:14:39.108 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:39.108 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:39.367 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:39.367 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:39.367 true 00:14:39.367 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:39.367 true 00:14:39.628 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:39.628 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:39.628 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:39.628 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:39.628 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:39.628 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:39.628 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:39.628 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:39.628 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:39.628 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:39.629 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:39.629 rmmod nvme_tcp 00:14:39.629 rmmod nvme_fabrics 00:14:39.629 rmmod nvme_keyring 00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1398073 ']' 00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1398073 00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1398073 ']' 00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1398073 00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
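Stripped of the xtrace noise, the multitarget test body is a short RPC sequence against the running target. A sketch of that sequence, with the helper path, flags, and jq checks copied from the trace (the meaning of -s 32 is not asserted here, it is simply the flag the test passes):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

# One default target exists at startup.
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]

# Add two extra targets (flags as issued in the trace), then expect three.
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]

# Delete them again and confirm only the default target remains.
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]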
00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1398073 00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1398073' 00:14:39.629 killing process with pid 1398073 00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1398073 00:14:39.629 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1398073 00:14:39.890 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:39.890 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:39.890 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:39.890 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.890 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:39.890 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.890 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:39.890 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:42.431 00:14:42.431 real 0m9.677s 00:14:42.431 user 0m9.051s 00:14:42.431 sys 0m4.709s 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:42.431 ************************************ 00:14:42.431 END TEST nvmf_multitarget 00:14:42.431 ************************************ 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:42.431 ************************************ 00:14:42.431 START TEST nvmf_rpc 00:14:42.431 ************************************ 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:42.431 * Looking for test storage... 
00:14:42.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.431 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:42.432 11:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:42.432 11:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.718 11:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:47.718 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:47.719 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:47.719 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.719 
11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:47.719 Found net devices under 0000:86:00.0: cvl_0_0 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:47.719 Found net devices under 0000:86:00.1: cvl_0_1 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:47.719 11:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:47.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:14:47.719 00:14:47.719 --- 10.0.0.2 ping statistics --- 00:14:47.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.719 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:47.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:47.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.459 ms 00:14:47.719 00:14:47.719 --- 10.0.0.1 ping statistics --- 00:14:47.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.719 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1401827 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1401827 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1401827 ']' 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:47.719 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.719 [2024-07-26 11:03:06.869622] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:47.720 [2024-07-26 11:03:06.869666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.720 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.720 [2024-07-26 11:03:06.928056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.720 [2024-07-26 11:03:07.009280] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.720 [2024-07-26 11:03:07.009318] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.720 [2024-07-26 11:03:07.009325] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.720 [2024-07-26 11:03:07.009331] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.720 [2024-07-26 11:03:07.009336] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
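nvmfappstart, as traced above, amounts to launching nvmf_tgt inside the target namespace and waiting for its RPC socket. A minimal sketch with the binary path and flags taken from this run; the socket poll is a simplified stand-in for the harness's waitforlisten helper:

NS=cvl_0_0_ns_spdk
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

# -i 0: shm id, -e 0xFFFF: tracepoint group mask, -m 0xF: 4-core mask (reactors 0-3 above).
ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Simplified wait: block until the app exposes its RPC socket before issuing rpc calls.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done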
00:14:47.720 [2024-07-26 11:03:07.009383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.720 [2024-07-26 11:03:07.009481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.720 [2024-07-26 11:03:07.009568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.720 [2024-07-26 11:03:07.009568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.291 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.291 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:48.291 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:48.291 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:48.291 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.291 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.291 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:48.291 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.291 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.291 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.291 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:48.291 "tick_rate": 2300000000, 00:14:48.291 "poll_groups": [ 00:14:48.291 { 00:14:48.291 "name": "nvmf_tgt_poll_group_000", 00:14:48.291 "admin_qpairs": 0, 00:14:48.291 "io_qpairs": 0, 00:14:48.291 "current_admin_qpairs": 0, 00:14:48.291 "current_io_qpairs": 0, 00:14:48.291 "pending_bdev_io": 0, 00:14:48.291 "completed_nvme_io": 0, 00:14:48.291 "transports": [] 00:14:48.291 }, 00:14:48.291 { 00:14:48.291 "name": "nvmf_tgt_poll_group_001", 00:14:48.291 "admin_qpairs": 0, 00:14:48.291 "io_qpairs": 0, 00:14:48.291 "current_admin_qpairs": 0, 00:14:48.291 "current_io_qpairs": 0, 00:14:48.291 "pending_bdev_io": 0, 00:14:48.291 "completed_nvme_io": 0, 00:14:48.291 "transports": [] 00:14:48.291 }, 00:14:48.291 { 00:14:48.291 "name": "nvmf_tgt_poll_group_002", 00:14:48.291 "admin_qpairs": 0, 00:14:48.291 "io_qpairs": 0, 00:14:48.291 "current_admin_qpairs": 0, 00:14:48.291 "current_io_qpairs": 0, 00:14:48.291 "pending_bdev_io": 0, 00:14:48.291 "completed_nvme_io": 0, 00:14:48.291 "transports": [] 00:14:48.291 }, 00:14:48.291 { 00:14:48.291 "name": "nvmf_tgt_poll_group_003", 00:14:48.291 "admin_qpairs": 0, 00:14:48.292 "io_qpairs": 0, 00:14:48.292 "current_admin_qpairs": 0, 00:14:48.292 "current_io_qpairs": 0, 00:14:48.292 "pending_bdev_io": 0, 00:14:48.292 "completed_nvme_io": 0, 00:14:48.292 "transports": [] 00:14:48.292 } 00:14:48.292 ] 00:14:48.292 }' 00:14:48.292 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:48.292 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:48.292 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:48.292 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:48.292 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
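The first rpc.sh assertion above is purely structural: with core mask 0xF the target runs four reactors, so nvmf_get_stats must report four poll groups and, before any transport exists, empty transport lists and zero qpairs. A sketch of that check in the same jq style as the jcount helper in the trace:

# One poll group per reactor with -m 0xF, so expect exactly four names.
groups=$(rpc_cmd nvmf_get_stats | jq '.poll_groups[].name' | wc -l)
[ "$groups" -eq 4 ]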
00:14:48.552 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:48.552 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:48.552 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:48.552 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.552 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.552 [2024-07-26 11:03:07.836654] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.552 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.552 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:48.552 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.552 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.552 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.552 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:48.552 "tick_rate": 2300000000, 00:14:48.552 "poll_groups": [ 00:14:48.552 { 00:14:48.552 "name": "nvmf_tgt_poll_group_000", 00:14:48.552 "admin_qpairs": 0, 00:14:48.552 "io_qpairs": 0, 00:14:48.552 "current_admin_qpairs": 0, 00:14:48.552 "current_io_qpairs": 0, 00:14:48.552 "pending_bdev_io": 0, 00:14:48.553 "completed_nvme_io": 0, 00:14:48.553 "transports": [ 00:14:48.553 { 00:14:48.553 "trtype": "TCP" 00:14:48.553 } 00:14:48.553 ] 00:14:48.553 }, 00:14:48.553 { 00:14:48.553 "name": "nvmf_tgt_poll_group_001", 00:14:48.553 "admin_qpairs": 0, 00:14:48.553 "io_qpairs": 0, 00:14:48.553 "current_admin_qpairs": 0, 00:14:48.553 "current_io_qpairs": 0, 00:14:48.553 "pending_bdev_io": 0, 00:14:48.553 "completed_nvme_io": 0, 00:14:48.553 "transports": [ 00:14:48.553 { 00:14:48.553 "trtype": "TCP" 00:14:48.553 } 00:14:48.553 ] 00:14:48.553 }, 00:14:48.553 { 00:14:48.553 "name": "nvmf_tgt_poll_group_002", 00:14:48.553 "admin_qpairs": 0, 00:14:48.553 "io_qpairs": 0, 00:14:48.553 "current_admin_qpairs": 0, 00:14:48.553 "current_io_qpairs": 0, 00:14:48.553 "pending_bdev_io": 0, 00:14:48.553 "completed_nvme_io": 0, 00:14:48.553 "transports": [ 00:14:48.553 { 00:14:48.553 "trtype": "TCP" 00:14:48.553 } 00:14:48.553 ] 00:14:48.553 }, 00:14:48.553 { 00:14:48.553 "name": "nvmf_tgt_poll_group_003", 00:14:48.553 "admin_qpairs": 0, 00:14:48.553 "io_qpairs": 0, 00:14:48.553 "current_admin_qpairs": 0, 00:14:48.553 "current_io_qpairs": 0, 00:14:48.553 "pending_bdev_io": 0, 00:14:48.553 "completed_nvme_io": 0, 00:14:48.553 "transports": [ 00:14:48.553 { 00:14:48.553 "trtype": "TCP" 00:14:48.553 } 00:14:48.553 ] 00:14:48.553 } 00:14:48.553 ] 00:14:48.553 }' 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:48.553 11:03:07 
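After the empty-stats check, the test creates the TCP transport and re-reads the stats: each poll group should now list a TCP transport while the summed qpair counters stay at zero. A sketch of that step, with the transport flags reproduced verbatim from the trace and a jsum-style summation; the jq predicate is a compact variant of the per-field checks the script performs:

# Create the TCP transport (flags exactly as issued in this run).
rpc_cmd nvmf_create_transport -t tcp -o -u 8192

# Every poll group should now carry a TCP transport entry.
rpc_cmd nvmf_get_stats | jq -e '[.poll_groups[].transports[].trtype] | all(. == "TCP")'

# Summed admin/io qpair counts should still be zero (the jsum check in the trace).
io=$(rpc_cmd nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}')
[ "$io" -eq 0 ]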
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.553 Malloc1 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.553 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.553 [2024-07-26 11:03:08.008757] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:48.553 [2024-07-26 11:03:08.033615] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:14:48.553 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:48.553 could not add new controller: failed to write to nvme-fabrics device 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.553 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:49.935 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:49.936 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:49.936 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:49.936 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:49.936 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:51.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:51.844 [2024-07-26 11:03:11.318482] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:14:51.844 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:51.844 could not add new controller: failed to write to nvme-fabrics device 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.844 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:53.262 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:53.262 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:53.262 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:53.262 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:53.262 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
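The trace above exercises the target's host access control: with allow_any_host disabled on nqn.2016-06.io.spdk:cnode1, the first nvme connect is rejected ("Subsystem ... does not allow host ..."), it succeeds once the host NQN is added with nvmf_subsystem_add_host, and after nvmf_subsystem_remove_host the rejection returns until allow_any_host is re-enabled. The waitforserial helper then polls lsblk until a block device carrying the subsystem serial appears. A minimal standalone sketch of the first half of that flow is below; it assumes scripts/rpc.py from an SPDK checkout (rpc_cmd in the trace is the test framework's wrapper around it) and reuses the address, NQNs and serial taken from the log.

  #!/usr/bin/env bash
  # Sketch only: reproduce the access-control check shown in the trace.
  # Assumes an SPDK nvmf target is already running and nvme-cli is installed.
  rpc=./scripts/rpc.py
  subnqn=nqn.2016-06.io.spdk:cnode1
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  hostid=80aaeb9f-0274-ea11-906e-0017a4403562

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem "$subnqn" -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns "$subnqn" Malloc1
  $rpc nvmf_subsystem_allow_any_host -d "$subnqn"      # turn allow_any_host off
  $rpc nvmf_subsystem_add_listener "$subnqn" -t tcp -a 10.0.0.2 -s 4420

  # Expected to fail: the host NQN is not on the subsystem's allowed list yet.
  nvme connect -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420 \
      --hostnqn="$hostnqn" --hostid="$hostid" || echo "connect denied, as expected"

  # Allow the host explicitly; the same connect now succeeds.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn"
  nvme connect -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420 \
      --hostnqn="$hostnqn" --hostid="$hostid"

  # Poll for the namespace by serial, as waitforserial does in the trace.
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
  nvme disconnect -n "$subnqn"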
00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.174 [2024-07-26 11:03:14.603032] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.174 
11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.174 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:55.175 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.175 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:55.175 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.175 11:03:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:56.558 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:56.558 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:56.558 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.558 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:56.558 11:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:58.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.466 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.467 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.467 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.467 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.467 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.467 [2024-07-26 11:03:17.894430] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.467 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.467 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:58.467 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.467 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.467 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.467 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:58.467 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.467 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.467 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.467 11:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:59.846 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:59.846 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:14:59.846 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.846 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:59.846 11:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:01.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.825 [2024-07-26 11:03:21.237371] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.825 11:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.205 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:03.205 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:03.205 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.205 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:03.205 11:03:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.114 11:03:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.114 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.115 [2024-07-26 11:03:24.532594] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.115 11:03:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:06.495 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:06.495 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:06.495 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:06.495 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:06.495 11:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.402 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.662 [2024-07-26 11:03:27.932935] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.662 11:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:10.044 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:10.044 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:10.044 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:10.044 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:10.044 11:03:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:11.953 11:03:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:11.953 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 [2024-07-26 11:03:31.277264] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 [2024-07-26 11:03:31.325365] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 [2024-07-26 11:03:31.377521] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 [2024-07-26 11:03:31.425664] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.954 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.214 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.214 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.214 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.214 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.214 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 [2024-07-26 11:03:31.473833] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.215 11:03:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:12.215 "tick_rate": 2300000000, 00:15:12.215 "poll_groups": [ 00:15:12.215 { 00:15:12.215 "name": "nvmf_tgt_poll_group_000", 00:15:12.215 "admin_qpairs": 2, 00:15:12.215 "io_qpairs": 168, 00:15:12.215 "current_admin_qpairs": 0, 00:15:12.215 "current_io_qpairs": 0, 00:15:12.215 "pending_bdev_io": 0, 00:15:12.215 "completed_nvme_io": 268, 00:15:12.215 "transports": [ 00:15:12.215 { 00:15:12.215 "trtype": "TCP" 00:15:12.215 } 00:15:12.215 ] 00:15:12.215 }, 00:15:12.215 { 00:15:12.215 "name": "nvmf_tgt_poll_group_001", 00:15:12.215 "admin_qpairs": 2, 00:15:12.215 "io_qpairs": 168, 00:15:12.215 "current_admin_qpairs": 0, 00:15:12.215 "current_io_qpairs": 0, 00:15:12.215 "pending_bdev_io": 0, 00:15:12.215 "completed_nvme_io": 219, 00:15:12.215 "transports": [ 00:15:12.215 { 00:15:12.215 "trtype": "TCP" 00:15:12.215 } 00:15:12.215 ] 00:15:12.215 }, 00:15:12.215 { 00:15:12.215 "name": "nvmf_tgt_poll_group_002", 00:15:12.215 "admin_qpairs": 1, 00:15:12.215 "io_qpairs": 168, 00:15:12.215 "current_admin_qpairs": 0, 00:15:12.215 "current_io_qpairs": 0, 00:15:12.215 "pending_bdev_io": 0, 00:15:12.215 "completed_nvme_io": 267, 00:15:12.215 "transports": [ 00:15:12.215 { 00:15:12.215 "trtype": "TCP" 00:15:12.215 } 00:15:12.215 ] 00:15:12.215 }, 00:15:12.215 { 00:15:12.215 "name": "nvmf_tgt_poll_group_003", 00:15:12.215 "admin_qpairs": 2, 00:15:12.215 "io_qpairs": 168, 00:15:12.215 "current_admin_qpairs": 0, 00:15:12.215 "current_io_qpairs": 0, 00:15:12.215 "pending_bdev_io": 0, 00:15:12.215 "completed_nvme_io": 268, 00:15:12.215 "transports": [ 00:15:12.215 { 00:15:12.215 "trtype": "TCP" 00:15:12.215 } 00:15:12.215 ] 00:15:12.215 } 00:15:12.215 ] 00:15:12.215 }' 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:12.215 rmmod nvme_tcp 00:15:12.215 rmmod nvme_fabrics 00:15:12.215 rmmod nvme_keyring 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1401827 ']' 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1401827 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1401827 ']' 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1401827 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.215 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1401827 00:15:12.475 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:12.475 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:12.475 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1401827' 00:15:12.475 killing process with pid 1401827 00:15:12.475 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1401827 00:15:12.475 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1401827 00:15:12.475 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:12.475 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:12.475 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:12.475 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.475 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.475 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.475 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.475 11:03:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.019 11:03:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:15.019 00:15:15.019 real 0m32.642s 00:15:15.019 user 1m40.984s 00:15:15.019 sys 0m5.515s 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.019 ************************************ 00:15:15.019 END TEST nvmf_rpc 00:15:15.019 ************************************ 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:15.019 ************************************ 00:15:15.019 START TEST nvmf_invalid 00:15:15.019 ************************************ 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:15.019 * Looking for test storage... 00:15:15.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:15.019 11:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:15.019 11:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:15.019 11:03:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:20.304 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:20.305 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:20.305 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:20.305 Found net devices under 0000:86:00.0: cvl_0_0 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.305 11:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:20.305 Found net devices under 0000:86:00.1: cvl_0_1 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:20.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:15:20.305 00:15:20.305 --- 10.0.0.2 ping statistics --- 00:15:20.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.305 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:20.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.463 ms 00:15:20.305 00:15:20.305 --- 10.0.0.1 ping statistics --- 00:15:20.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.305 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1409962 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1409962 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1409962 ']' 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.305 11:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.305 11:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:20.305 [2024-07-26 11:03:39.660619] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:20.305 [2024-07-26 11:03:39.660665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.305 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.305 [2024-07-26 11:03:39.717410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.305 [2024-07-26 11:03:39.797774] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.305 [2024-07-26 11:03:39.797809] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.305 [2024-07-26 11:03:39.797816] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.305 [2024-07-26 11:03:39.797822] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.305 [2024-07-26 11:03:39.797827] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:20.305 [2024-07-26 11:03:39.797868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.306 [2024-07-26 11:03:39.797963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.306 [2024-07-26 11:03:39.798024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.306 [2024-07-26 11:03:39.798026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.244 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.244 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:21.244 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:21.244 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:21.244 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:21.244 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.244 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:21.244 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode26521 00:15:21.244 [2024-07-26 11:03:40.683085] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:21.244 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:21.244 { 00:15:21.244 "nqn": "nqn.2016-06.io.spdk:cnode26521", 00:15:21.244 "tgt_name": "foobar", 00:15:21.244 "method": "nvmf_create_subsystem", 00:15:21.244 "req_id": 1 00:15:21.244 } 00:15:21.244 Got JSON-RPC error response 00:15:21.244 response: 00:15:21.244 { 00:15:21.244 "code": -32603, 00:15:21.244 "message": "Unable to find target foobar" 00:15:21.244 }' 00:15:21.244 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:21.244 { 00:15:21.244 "nqn": "nqn.2016-06.io.spdk:cnode26521", 00:15:21.244 "tgt_name": "foobar", 00:15:21.244 "method": "nvmf_create_subsystem", 00:15:21.244 "req_id": 1 00:15:21.244 } 00:15:21.244 Got JSON-RPC error response 00:15:21.244 response: 00:15:21.244 { 00:15:21.244 "code": -32603, 00:15:21.244 "message": "Unable to find target foobar" 00:15:21.244 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:21.244 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:21.244 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8680 00:15:21.504 [2024-07-26 11:03:40.883808] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8680: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:21.504 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:21.504 { 00:15:21.504 "nqn": "nqn.2016-06.io.spdk:cnode8680", 00:15:21.504 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:21.504 "method": "nvmf_create_subsystem", 00:15:21.504 "req_id": 1 00:15:21.504 } 00:15:21.504 Got JSON-RPC error 
response 00:15:21.504 response: 00:15:21.504 { 00:15:21.504 "code": -32602, 00:15:21.504 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:21.504 }' 00:15:21.504 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:21.504 { 00:15:21.504 "nqn": "nqn.2016-06.io.spdk:cnode8680", 00:15:21.504 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:21.504 "method": "nvmf_create_subsystem", 00:15:21.504 "req_id": 1 00:15:21.504 } 00:15:21.504 Got JSON-RPC error response 00:15:21.504 response: 00:15:21.504 { 00:15:21.504 "code": -32602, 00:15:21.504 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:21.504 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:21.504 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:21.504 11:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4685 00:15:21.764 [2024-07-26 11:03:41.056347] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4685: invalid model number 'SPDK_Controller' 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:21.764 { 00:15:21.764 "nqn": "nqn.2016-06.io.spdk:cnode4685", 00:15:21.764 "model_number": "SPDK_Controller\u001f", 00:15:21.764 "method": "nvmf_create_subsystem", 00:15:21.764 "req_id": 1 00:15:21.764 } 00:15:21.764 Got JSON-RPC error response 00:15:21.764 response: 00:15:21.764 { 00:15:21.764 "code": -32602, 00:15:21.764 "message": "Invalid MN SPDK_Controller\u001f" 00:15:21.764 }' 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:21.764 { 00:15:21.764 "nqn": "nqn.2016-06.io.spdk:cnode4685", 00:15:21.764 "model_number": "SPDK_Controller\u001f", 00:15:21.764 "method": "nvmf_create_subsystem", 00:15:21.764 "req_id": 1 00:15:21.764 } 00:15:21.764 Got JSON-RPC error response 00:15:21.764 response: 00:15:21.764 { 00:15:21.764 "code": -32602, 00:15:21.764 "message": "Invalid MN SPDK_Controller\u001f" 00:15:21.764 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 90 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.764 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:21.765 11:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Z == \- ]] 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Z:}pA9,h'\''^4%QndB-Nd1~' 00:15:21.765 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Z:}pA9,h'\''^4%QndB-Nd1~' nqn.2016-06.io.spdk:cnode17266 00:15:22.025 [2024-07-26 11:03:41.381451] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17266: invalid serial number 'Z:}pA9,h'^4%QndB-Nd1~' 00:15:22.025 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:22.025 { 00:15:22.025 "nqn": "nqn.2016-06.io.spdk:cnode17266", 00:15:22.025 "serial_number": "Z:}pA9,h'\''^4%QndB-Nd1~", 00:15:22.025 "method": "nvmf_create_subsystem", 00:15:22.025 "req_id": 1 00:15:22.025 } 00:15:22.025 Got JSON-RPC error response 00:15:22.025 response: 00:15:22.025 { 00:15:22.025 "code": -32602, 00:15:22.025 "message": "Invalid SN Z:}pA9,h'\''^4%QndB-Nd1~" 00:15:22.025 }' 00:15:22.025 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:22.025 { 00:15:22.025 "nqn": "nqn.2016-06.io.spdk:cnode17266", 00:15:22.025 "serial_number": "Z:}pA9,h'^4%QndB-Nd1~", 00:15:22.025 "method": "nvmf_create_subsystem", 00:15:22.025 "req_id": 1 00:15:22.025 } 00:15:22.025 Got JSON-RPC error response 00:15:22.025 response: 00:15:22.025 { 00:15:22.025 "code": -32602, 00:15:22.025 "message": "Invalid SN Z:}pA9,h'^4%QndB-Nd1~" 00:15:22.025 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:22.025 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:22.025 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:22.025 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:22.025 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:22.025 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:22.025 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:15:22.026 11:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:22.026 
11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:15:22.026 
11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.026 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:22.286 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
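For reference: the xtrace above is target/invalid.sh assembling a random model number one character at a time — printf %x converts a decimal character code to hex, echo -e '\xNN' expands the hex escape into the actual character, and string+= appends it. A minimal stand-alone sketch of the same technique follows; the function name, the use of $RANDOM, and the printable-ASCII range 33-126 are illustrative assumptions, not the script's exact character pool.

# Sketch of the printf-%x / echo -e character builder traced above (assumptions noted in the lead-in).
gen_random_string() {
    local length=$1 string='' ll code hex
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))     # printable ASCII, space excluded (assumption)
        hex=$(printf %x "$code")         # e.g. 84 -> 54
        string+=$(echo -e "\x$hex")      # e.g. \x54 -> T
    done
    echo "$string"
}
gen_random_string 41    # yields something like the 41-character garbage model number in this trace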
00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
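Once the 41 characters have been collected, the trace just below hands the string to nvmf_create_subsystem as a model number and expects the JSON-RPC error -32602 "Invalid MN". Condensed, the check amounts to the following sketch; $rootdir stands in for the jenkins workspace spdk checkout, gen_random_string is the sketch above, and capturing stderr with 2>&1 is an assumption about how the output is collected.

model_number=$(gen_random_string 41)             # from the sketch above
out=$("$rootdir/scripts/rpc.py" nvmf_create_subsystem \
        -d "$model_number" nqn.2016-06.io.spdk:cnode24638 2>&1) || true
[[ $out == *"Invalid MN"* ]]                     # the target must reject the bogus model number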
00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 6 == \- ]] 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '6cV'\''UjR~TF\Y"53NL&pD|oJ:~+4P(4TiF?Z|`%?]G' 00:15:22.287 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '6cV'\''UjR~TF\Y"53NL&pD|oJ:~+4P(4TiF?Z|`%?]G' nqn.2016-06.io.spdk:cnode24638 00:15:22.546 [2024-07-26 11:03:41.838993] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24638: invalid model number '6cV'UjR~TF\Y"53NL&pD|oJ:~+4P(4TiF?Z|`%?]G' 00:15:22.546 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:22.546 { 00:15:22.546 "nqn": "nqn.2016-06.io.spdk:cnode24638", 00:15:22.546 "model_number": "6cV'\''UjR~TF\\Y\"53NL&pD|oJ:~+4P(4TiF?Z|`%?]G", 00:15:22.546 "method": "nvmf_create_subsystem", 00:15:22.546 "req_id": 1 00:15:22.546 } 00:15:22.546 Got JSON-RPC error response 00:15:22.546 response: 00:15:22.546 { 00:15:22.546 "code": -32602, 00:15:22.546 "message": "Invalid MN 6cV'\''UjR~TF\\Y\"53NL&pD|oJ:~+4P(4TiF?Z|`%?]G" 00:15:22.546 }' 00:15:22.546 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:22.546 { 00:15:22.546 "nqn": "nqn.2016-06.io.spdk:cnode24638", 00:15:22.546 "model_number": "6cV'UjR~TF\\Y\"53NL&pD|oJ:~+4P(4TiF?Z|`%?]G", 00:15:22.546 "method": "nvmf_create_subsystem", 00:15:22.546 "req_id": 1 00:15:22.546 } 00:15:22.546 Got JSON-RPC error response 00:15:22.546 response: 00:15:22.546 { 00:15:22.546 "code": -32602, 00:15:22.546 "message": "Invalid MN 6cV'UjR~TF\\Y\"53NL&pD|oJ:~+4P(4TiF?Z|`%?]G" 00:15:22.546 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:22.546 11:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:22.546 [2024-07-26 11:03:42.035723] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.806 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:22.806 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:22.806 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:22.806 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:22.806 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:22.806 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:23.065 [2024-07-26 11:03:42.410405] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:23.065 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:23.065 { 00:15:23.065 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:23.065 "listen_address": { 00:15:23.065 "trtype": "tcp", 00:15:23.065 "traddr": "", 00:15:23.065 "trsvcid": "4421" 00:15:23.065 }, 00:15:23.065 "method": "nvmf_subsystem_remove_listener", 00:15:23.065 "req_id": 1 00:15:23.065 } 00:15:23.065 Got JSON-RPC error response 00:15:23.065 response: 00:15:23.065 { 00:15:23.065 "code": -32602, 00:15:23.065 "message": "Invalid parameters" 00:15:23.065 }' 00:15:23.065 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:23.065 { 00:15:23.065 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:23.065 "listen_address": { 00:15:23.065 "trtype": "tcp", 00:15:23.065 "traddr": "", 00:15:23.065 "trsvcid": "4421" 00:15:23.065 }, 00:15:23.065 "method": "nvmf_subsystem_remove_listener", 00:15:23.065 "req_id": 1 00:15:23.065 } 00:15:23.065 Got JSON-RPC error response 00:15:23.065 response: 00:15:23.065 { 00:15:23.065 "code": -32602, 00:15:23.065 "message": "Invalid parameters" 00:15:23.065 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:23.065 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16327 -i 0 00:15:23.324 [2024-07-26 11:03:42.590951] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16327: invalid cntlid range [0-65519] 00:15:23.324 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:23.324 { 00:15:23.324 "nqn": "nqn.2016-06.io.spdk:cnode16327", 00:15:23.324 "min_cntlid": 0, 00:15:23.324 "method": "nvmf_create_subsystem", 00:15:23.324 "req_id": 1 00:15:23.324 } 00:15:23.324 Got JSON-RPC error response 00:15:23.324 response: 00:15:23.324 { 00:15:23.324 "code": -32602, 00:15:23.324 "message": "Invalid cntlid range [0-65519]" 00:15:23.324 }' 00:15:23.324 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:23.324 { 00:15:23.324 "nqn": "nqn.2016-06.io.spdk:cnode16327", 00:15:23.324 "min_cntlid": 0, 00:15:23.324 "method": "nvmf_create_subsystem", 00:15:23.324 "req_id": 1 00:15:23.324 } 00:15:23.324 Got JSON-RPC error response 00:15:23.324 response: 00:15:23.324 { 00:15:23.324 "code": -32602, 00:15:23.324 "message": "Invalid cntlid range [0-65519]" 00:15:23.324 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:23.324 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3282 -i 65520 00:15:23.325 [2024-07-26 11:03:42.775573] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3282: invalid cntlid range [65520-65519] 00:15:23.325 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:23.325 { 00:15:23.325 "nqn": "nqn.2016-06.io.spdk:cnode3282", 00:15:23.325 "min_cntlid": 65520, 00:15:23.325 "method": "nvmf_create_subsystem", 00:15:23.325 "req_id": 1 00:15:23.325 } 00:15:23.325 Got JSON-RPC error response 00:15:23.325 response: 00:15:23.325 { 00:15:23.325 "code": -32602, 00:15:23.325 "message": "Invalid cntlid range [65520-65519]" 00:15:23.325 }' 00:15:23.325 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:23.325 { 00:15:23.325 "nqn": "nqn.2016-06.io.spdk:cnode3282", 00:15:23.325 "min_cntlid": 65520, 00:15:23.325 "method": "nvmf_create_subsystem", 00:15:23.325 "req_id": 1 00:15:23.325 } 00:15:23.325 Got JSON-RPC error response 00:15:23.325 response: 00:15:23.325 { 00:15:23.325 "code": -32602, 00:15:23.325 "message": "Invalid cntlid range [65520-65519]" 00:15:23.325 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:23.325 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18720 -I 0 00:15:23.584 [2024-07-26 11:03:42.960229] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18720: invalid cntlid range [1-0] 00:15:23.584 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:23.584 { 00:15:23.584 "nqn": "nqn.2016-06.io.spdk:cnode18720", 00:15:23.584 "max_cntlid": 0, 00:15:23.584 "method": "nvmf_create_subsystem", 00:15:23.584 "req_id": 1 00:15:23.584 } 00:15:23.584 Got JSON-RPC error response 00:15:23.584 response: 00:15:23.584 { 00:15:23.584 "code": -32602, 00:15:23.584 "message": "Invalid cntlid range [1-0]" 00:15:23.584 }' 00:15:23.584 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:23.584 { 00:15:23.584 "nqn": "nqn.2016-06.io.spdk:cnode18720", 00:15:23.584 "max_cntlid": 0, 00:15:23.584 "method": "nvmf_create_subsystem", 00:15:23.584 "req_id": 1 00:15:23.584 } 00:15:23.584 Got JSON-RPC error response 00:15:23.584 response: 00:15:23.584 { 00:15:23.584 "code": -32602, 00:15:23.584 "message": "Invalid cntlid range [1-0]" 00:15:23.584 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:23.584 11:03:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32231 -I 65520 00:15:23.844 [2024-07-26 11:03:43.140793] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32231: invalid cntlid range [1-65520] 00:15:23.844 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:23.844 { 00:15:23.844 "nqn": "nqn.2016-06.io.spdk:cnode32231", 00:15:23.844 "max_cntlid": 65520, 00:15:23.844 "method": "nvmf_create_subsystem", 00:15:23.844 "req_id": 1 00:15:23.844 } 00:15:23.844 Got JSON-RPC error response 00:15:23.844 response: 00:15:23.844 { 00:15:23.844 "code": -32602, 00:15:23.844 "message": "Invalid cntlid range [1-65520]" 00:15:23.844 }' 00:15:23.844 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 
request: 00:15:23.844 { 00:15:23.844 "nqn": "nqn.2016-06.io.spdk:cnode32231", 00:15:23.844 "max_cntlid": 65520, 00:15:23.844 "method": "nvmf_create_subsystem", 00:15:23.844 "req_id": 1 00:15:23.844 } 00:15:23.844 Got JSON-RPC error response 00:15:23.844 response: 00:15:23.844 { 00:15:23.844 "code": -32602, 00:15:23.844 "message": "Invalid cntlid range [1-65520]" 00:15:23.844 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:23.844 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19924 -i 6 -I 5 00:15:23.844 [2024-07-26 11:03:43.333460] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19924: invalid cntlid range [6-5] 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:24.111 { 00:15:24.111 "nqn": "nqn.2016-06.io.spdk:cnode19924", 00:15:24.111 "min_cntlid": 6, 00:15:24.111 "max_cntlid": 5, 00:15:24.111 "method": "nvmf_create_subsystem", 00:15:24.111 "req_id": 1 00:15:24.111 } 00:15:24.111 Got JSON-RPC error response 00:15:24.111 response: 00:15:24.111 { 00:15:24.111 "code": -32602, 00:15:24.111 "message": "Invalid cntlid range [6-5]" 00:15:24.111 }' 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:24.111 { 00:15:24.111 "nqn": "nqn.2016-06.io.spdk:cnode19924", 00:15:24.111 "min_cntlid": 6, 00:15:24.111 "max_cntlid": 5, 00:15:24.111 "method": "nvmf_create_subsystem", 00:15:24.111 "req_id": 1 00:15:24.111 } 00:15:24.111 Got JSON-RPC error response 00:15:24.111 response: 00:15:24.111 { 00:15:24.111 "code": -32602, 00:15:24.111 "message": "Invalid cntlid range [6-5]" 00:15:24.111 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:24.111 { 00:15:24.111 "name": "foobar", 00:15:24.111 "method": "nvmf_delete_target", 00:15:24.111 "req_id": 1 00:15:24.111 } 00:15:24.111 Got JSON-RPC error response 00:15:24.111 response: 00:15:24.111 { 00:15:24.111 "code": -32602, 00:15:24.111 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:24.111 }' 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:24.111 { 00:15:24.111 "name": "foobar", 00:15:24.111 "method": "nvmf_delete_target", 00:15:24.111 "req_id": 1 00:15:24.111 } 00:15:24.111 Got JSON-RPC error response 00:15:24.111 response: 00:15:24.111 { 00:15:24.111 "code": -32602, 00:15:24.111 "message": "The specified target doesn't exist, cannot delete it." 
00:15:24.111 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:24.111 rmmod nvme_tcp 00:15:24.111 rmmod nvme_fabrics 00:15:24.111 rmmod nvme_keyring 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1409962 ']' 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1409962 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1409962 ']' 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1409962 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1409962 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1409962' 00:15:24.111 killing process with pid 1409962 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1409962 00:15:24.111 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1409962 00:15:24.416 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:24.416 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:24.416 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:24.416 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:24.416 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:24.416 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.416 
11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.416 11:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.960 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:26.960 00:15:26.960 real 0m11.782s 00:15:26.960 user 0m19.634s 00:15:26.960 sys 0m5.059s 00:15:26.960 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:26.960 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:26.960 ************************************ 00:15:26.960 END TEST nvmf_invalid 00:15:26.960 ************************************ 00:15:26.960 11:03:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:26.960 11:03:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:26.960 11:03:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:26.960 11:03:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:26.960 ************************************ 00:15:26.960 START TEST nvmf_connect_stress 00:15:26.960 ************************************ 00:15:26.960 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:26.960 * Looking for test storage... 00:15:26.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.960 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.960 11:03:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.960 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:26.961 11:03:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:32.245 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:32.246 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:32.246 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:32.246 Found net devices under 0000:86:00.0: cvl_0_0 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.246 11:03:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:32.246 Found net devices under 0000:86:00.1: cvl_0_1 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:32.246 11:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:32.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:15:32.246 00:15:32.246 --- 10.0.0.2 ping statistics --- 00:15:32.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.246 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:32.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:32.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.409 ms 00:15:32.246 00:15:32.246 --- 10.0.0.1 ping statistics --- 00:15:32.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.246 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1414128 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1414128 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1414128 ']' 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:32.246 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.247 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.247 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.247 [2024-07-26 11:03:51.144902] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:32.247 [2024-07-26 11:03:51.144946] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.247 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.247 [2024-07-26 11:03:51.200689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:32.247 [2024-07-26 11:03:51.272822] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.247 [2024-07-26 11:03:51.272859] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.247 [2024-07-26 11:03:51.272865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.247 [2024-07-26 11:03:51.272871] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.247 [2024-07-26 11:03:51.272876] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
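Before the stress test proper, nvmftestinit splits the two cvl_0_* ports into a target side and an initiator side using a network namespace; the ip/iptables commands traced above boil down to the following sequence, taken directly from the trace with only the address-flush/cleanup steps omitted.

# Target NIC moves into its own netns; the initiator NIC stays in the default namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator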
00:15:32.247 [2024-07-26 11:03:51.272974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.247 [2024-07-26 11:03:51.272993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.247 [2024-07-26 11:03:51.272994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.508 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:32.508 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:15:32.508 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:32.508 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:32.508 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.508 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.508 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:32.508 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.508 11:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.508 [2024-07-26 11:03:51.985789] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.768 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.768 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:32.768 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.768 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.768 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.769 [2024-07-26 11:03:52.021074] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.769 NULL1 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=1414374 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.769 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.029 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.029 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:33.029 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.029 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.029 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.289 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.289 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:33.289 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.289 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.289 11:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.859 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.859 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:33.859 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.859 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.859 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.118 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.118 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:34.118 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.118 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.118 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.378 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.378 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:34.378 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.378 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.378 11:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.638 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.638 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:34.638 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.638 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.638 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.208 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.208 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:35.208 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.208 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.208 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.468 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.468 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:35.468 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.468 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.468 11:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.728 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.728 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:35.728 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.728 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.728 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.987 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.987 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:35.987 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.987 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.987 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.246 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.246 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:36.246 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.246 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.246 11:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.815 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.815 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:36.815 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.815 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.815 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.074 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.074 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:37.074 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.074 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.074 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.334 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.334 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:37.334 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.334 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.334 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.595 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.595 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:37.595 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.595 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.595 11:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.854 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.854 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:37.854 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:37.854 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.854 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.425 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.425 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:38.425 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.425 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.425 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.685 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.685 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:38.685 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.685 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.685 11:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:38.944 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.944 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:38.944 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:38.944 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.944 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.204 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.204 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:39.204 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.204 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.204 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.464 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.464 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:39.464 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:39.464 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.464 11:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.033 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.033 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:40.033 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.033 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.033 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.292 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.293 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:40.293 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.293 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.293 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.553 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.553 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:40.553 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.553 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.553 11:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:40.812 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.812 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:40.812 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:40.812 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.812 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.381 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.381 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:41.381 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.381 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.381 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.640 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.640 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:41.641 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.641 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.641 11:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:41.899 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.899 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:41.899 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:41.899 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.899 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.159 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.159 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:42.159 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.159 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.159 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.419 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.419 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:42.419 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:42.419 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.419 11:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:42.678 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:42.938 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.938 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1414374 00:15:42.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1414374) - No such process 00:15:42.938 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1414374 00:15:42.938 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:42.938 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:42.938 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 
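The long run of "kill -0 1414374" / rpc_cmd pairs above is the liveness poll in connect_stress.sh: while the connect_stress tool (PID 1414374 in this run) keeps exercising the subsystem, the script issues RPC batches against the target, and it leaves the loop once kill -0 reports "No such process". A simplified reconstruction of that pattern, not the script's exact body, with the binary path and arguments copied from this job's trace:

    # Simplified sketch of the poll loop traced above.
    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do
        # the real script fires a batch of rpc_cmd calls here; a sleep stands in for that work
        sleep 1
    done
    wait "$PERF_PID"   # reap the tool once kill -0 reports it is gone (about 10 s in this run)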
00:15:42.938 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:42.938 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:42.938 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:42.939 rmmod nvme_tcp 00:15:42.939 rmmod nvme_fabrics 00:15:42.939 rmmod nvme_keyring 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1414128 ']' 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1414128 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1414128 ']' 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1414128 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1414128 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1414128' 00:15:42.939 killing process with pid 1414128 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1414128 00:15:42.939 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1414128 00:15:43.199 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:43.199 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:43.199 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:43.199 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:43.199 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:43.199 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.199 11:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:43.199 11:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.110 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:45.110 00:15:45.110 real 0m18.671s 00:15:45.110 user 0m40.684s 00:15:45.110 sys 0m7.824s 00:15:45.110 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:45.110 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:45.110 ************************************ 00:15:45.110 END TEST nvmf_connect_stress 00:15:45.110 ************************************ 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:45.406 ************************************ 00:15:45.406 START TEST nvmf_fused_ordering 00:15:45.406 ************************************ 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:45.406 * Looking for test storage... 00:15:45.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.406 11:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:45.406 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:45.407 11:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:50.695 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:50.695 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:50.695 Found net devices under 0000:86:00.0: cvl_0_0 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.695 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.696 11:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:50.696 Found net devices under 0000:86:00.1: cvl_0_1 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:50.696 11:04:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:50.696 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:50.696 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:15:50.696 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:50.696 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:50.696 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:50.696 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:50.696 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:50.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:15:50.957 00:15:50.957 --- 10.0.0.2 ping statistics --- 00:15:50.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.957 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:50.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:50.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:15:50.957 00:15:50.957 --- 10.0.0.1 ping statistics --- 00:15:50.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.957 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1419509 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1419509 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:50.957 11:04:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1419509 ']' 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:50.957 11:04:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:50.957 [2024-07-26 11:04:10.285582] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:50.957 [2024-07-26 11:04:10.285627] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.957 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.957 [2024-07-26 11:04:10.344536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.957 [2024-07-26 11:04:10.414302] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.957 [2024-07-26 11:04:10.414342] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.957 [2024-07-26 11:04:10.414349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.957 [2024-07-26 11:04:10.414354] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.957 [2024-07-26 11:04:10.414359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
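Before this second target comes up, nvmf_tcp_init (traced just above) splits the two ice-driven 0x159b ports between network namespaces: the target address 10.0.0.2 lands on cvl_0_0 inside cvl_0_0_ns_spdk, the initiator address 10.0.0.1 stays on cvl_0_1 in the root namespace, and both directions are verified with ping. The same plumbing, condensed from the trace (run as root; the cvl_0_* names are specific to this rig):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP (port 4420) through the host firewall
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator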
00:15:50.957 [2024-07-26 11:04:10.414376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:51.898 [2024-07-26 11:04:11.129459] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:51.898 [2024-07-26 11:04:11.149637] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:51.898 NULL1 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.898 11:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.898 11:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:51.898 [2024-07-26 11:04:11.204083] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:51.898 [2024-07-26 11:04:11.204114] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419621 ] 00:15:51.898 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.839 Attached to nqn.2016-06.io.spdk:cnode1 00:15:52.839 Namespace ID: 1 size: 1GB 00:15:52.839 fused_ordering(0) 00:15:52.839 fused_ordering(1) 00:15:52.839 fused_ordering(2) 00:15:52.839 fused_ordering(3) 00:15:52.839 fused_ordering(4) 00:15:52.839 fused_ordering(5) 00:15:52.839 fused_ordering(6) 00:15:52.839 fused_ordering(7) 00:15:52.839 fused_ordering(8) 00:15:52.839 fused_ordering(9) 00:15:52.839 fused_ordering(10) 00:15:52.839 fused_ordering(11) 00:15:52.839 fused_ordering(12) 00:15:52.839 fused_ordering(13) 00:15:52.839 fused_ordering(14) 00:15:52.839 fused_ordering(15) 00:15:52.839 fused_ordering(16) 00:15:52.839 fused_ordering(17) 00:15:52.839 fused_ordering(18) 00:15:52.839 fused_ordering(19) 00:15:52.839 fused_ordering(20) 00:15:52.839 fused_ordering(21) 00:15:52.839 fused_ordering(22) 00:15:52.839 fused_ordering(23) 00:15:52.839 fused_ordering(24) 00:15:52.839 fused_ordering(25) 00:15:52.839 fused_ordering(26) 00:15:52.839 fused_ordering(27) 00:15:52.839 fused_ordering(28) 00:15:52.839 fused_ordering(29) 00:15:52.839 fused_ordering(30) 00:15:52.839 fused_ordering(31) 00:15:52.839 fused_ordering(32) 00:15:52.839 fused_ordering(33) 00:15:52.839 fused_ordering(34) 00:15:52.839 fused_ordering(35) 00:15:52.839 fused_ordering(36) 00:15:52.839 fused_ordering(37) 00:15:52.839 fused_ordering(38) 00:15:52.839 fused_ordering(39) 00:15:52.839 fused_ordering(40) 00:15:52.839 fused_ordering(41) 00:15:52.839 fused_ordering(42) 00:15:52.839 fused_ordering(43) 00:15:52.839 fused_ordering(44) 00:15:52.839 fused_ordering(45) 00:15:52.839 fused_ordering(46) 00:15:52.839 fused_ordering(47) 00:15:52.839 fused_ordering(48) 00:15:52.839 fused_ordering(49) 00:15:52.839 fused_ordering(50) 00:15:52.839 fused_ordering(51) 00:15:52.839 fused_ordering(52) 00:15:52.839 fused_ordering(53) 00:15:52.839 fused_ordering(54) 00:15:52.839 fused_ordering(55) 00:15:52.839 fused_ordering(56) 00:15:52.839 fused_ordering(57) 00:15:52.839 fused_ordering(58) 00:15:52.839 fused_ordering(59) 00:15:52.839 fused_ordering(60) 00:15:52.839 
fused_ordering(61) 00:15:52.839 fused_ordering(62) 00:15:52.839 fused_ordering(63) 00:15:52.839 fused_ordering(64) 00:15:52.839 fused_ordering(65) 00:15:52.839 fused_ordering(66) 00:15:52.839 fused_ordering(67) 00:15:52.839 fused_ordering(68) 00:15:52.839 fused_ordering(69) 00:15:52.839 fused_ordering(70) 00:15:52.839 fused_ordering(71) 00:15:52.839 fused_ordering(72) 00:15:52.839 fused_ordering(73) 00:15:52.839 fused_ordering(74) 00:15:52.839 fused_ordering(75) 00:15:52.839 fused_ordering(76) 00:15:52.839 fused_ordering(77) 00:15:52.839 fused_ordering(78) 00:15:52.839 fused_ordering(79) 00:15:52.839 fused_ordering(80) 00:15:52.839 fused_ordering(81) 00:15:52.839 fused_ordering(82) 00:15:52.840 fused_ordering(83) 00:15:52.840 fused_ordering(84) 00:15:52.840 fused_ordering(85) 00:15:52.840 fused_ordering(86) 00:15:52.840 fused_ordering(87) 00:15:52.840 fused_ordering(88) 00:15:52.840 fused_ordering(89) 00:15:52.840 fused_ordering(90) 00:15:52.840 fused_ordering(91) 00:15:52.840 fused_ordering(92) 00:15:52.840 fused_ordering(93) 00:15:52.840 fused_ordering(94) 00:15:52.840 fused_ordering(95) 00:15:52.840 fused_ordering(96) 00:15:52.840 fused_ordering(97) 00:15:52.840 fused_ordering(98) 00:15:52.840 fused_ordering(99) 00:15:52.840 fused_ordering(100) 00:15:52.840 fused_ordering(101) 00:15:52.840 fused_ordering(102) 00:15:52.840 fused_ordering(103) 00:15:52.840 fused_ordering(104) 00:15:52.840 fused_ordering(105) 00:15:52.840 fused_ordering(106) 00:15:52.840 fused_ordering(107) 00:15:52.840 fused_ordering(108) 00:15:52.840 fused_ordering(109) 00:15:52.840 fused_ordering(110) 00:15:52.840 fused_ordering(111) 00:15:52.840 fused_ordering(112) 00:15:52.840 fused_ordering(113) 00:15:52.840 fused_ordering(114) 00:15:52.840 fused_ordering(115) 00:15:52.840 fused_ordering(116) 00:15:52.840 fused_ordering(117) 00:15:52.840 fused_ordering(118) 00:15:52.840 fused_ordering(119) 00:15:52.840 fused_ordering(120) 00:15:52.840 fused_ordering(121) 00:15:52.840 fused_ordering(122) 00:15:52.840 fused_ordering(123) 00:15:52.840 fused_ordering(124) 00:15:52.840 fused_ordering(125) 00:15:52.840 fused_ordering(126) 00:15:52.840 fused_ordering(127) 00:15:52.840 fused_ordering(128) 00:15:52.840 fused_ordering(129) 00:15:52.840 fused_ordering(130) 00:15:52.840 fused_ordering(131) 00:15:52.840 fused_ordering(132) 00:15:52.840 fused_ordering(133) 00:15:52.840 fused_ordering(134) 00:15:52.840 fused_ordering(135) 00:15:52.840 fused_ordering(136) 00:15:52.840 fused_ordering(137) 00:15:52.840 fused_ordering(138) 00:15:52.840 fused_ordering(139) 00:15:52.840 fused_ordering(140) 00:15:52.840 fused_ordering(141) 00:15:52.840 fused_ordering(142) 00:15:52.840 fused_ordering(143) 00:15:52.840 fused_ordering(144) 00:15:52.840 fused_ordering(145) 00:15:52.840 fused_ordering(146) 00:15:52.840 fused_ordering(147) 00:15:52.840 fused_ordering(148) 00:15:52.840 fused_ordering(149) 00:15:52.840 fused_ordering(150) 00:15:52.840 fused_ordering(151) 00:15:52.840 fused_ordering(152) 00:15:52.840 fused_ordering(153) 00:15:52.840 fused_ordering(154) 00:15:52.840 fused_ordering(155) 00:15:52.840 fused_ordering(156) 00:15:52.840 fused_ordering(157) 00:15:52.840 fused_ordering(158) 00:15:52.840 fused_ordering(159) 00:15:52.840 fused_ordering(160) 00:15:52.840 fused_ordering(161) 00:15:52.840 fused_ordering(162) 00:15:52.840 fused_ordering(163) 00:15:52.840 fused_ordering(164) 00:15:52.840 fused_ordering(165) 00:15:52.840 fused_ordering(166) 00:15:52.840 fused_ordering(167) 00:15:52.840 fused_ordering(168) 00:15:52.840 fused_ordering(169) 
00:15:52.840 fused_ordering(170) 00:15:52.840 fused_ordering(171) 00:15:52.840 fused_ordering(172) 00:15:52.840 fused_ordering(173) 00:15:52.840 fused_ordering(174) 00:15:52.840 fused_ordering(175) 00:15:52.840 fused_ordering(176) 00:15:52.840 fused_ordering(177) 00:15:52.840 fused_ordering(178) 00:15:52.840 fused_ordering(179) 00:15:52.840 fused_ordering(180) 00:15:52.840 fused_ordering(181) 00:15:52.840 fused_ordering(182) 00:15:52.840 fused_ordering(183) 00:15:52.840 fused_ordering(184) 00:15:52.840 fused_ordering(185) 00:15:52.840 fused_ordering(186) 00:15:52.840 fused_ordering(187) 00:15:52.840 fused_ordering(188) 00:15:52.840 fused_ordering(189) 00:15:52.840 fused_ordering(190) 00:15:52.840 fused_ordering(191) 00:15:52.840 fused_ordering(192) 00:15:52.840 fused_ordering(193) 00:15:52.840 fused_ordering(194) 00:15:52.840 fused_ordering(195) 00:15:52.840 fused_ordering(196) 00:15:52.840 fused_ordering(197) 00:15:52.840 fused_ordering(198) 00:15:52.840 fused_ordering(199) 00:15:52.840 fused_ordering(200) 00:15:52.840 fused_ordering(201) 00:15:52.840 fused_ordering(202) 00:15:52.840 fused_ordering(203) 00:15:52.840 fused_ordering(204) 00:15:52.840 fused_ordering(205) 00:15:53.779 fused_ordering(206) 00:15:53.779 fused_ordering(207) 00:15:53.779 fused_ordering(208) 00:15:53.779 fused_ordering(209) 00:15:53.779 fused_ordering(210) 00:15:53.779 fused_ordering(211) 00:15:53.779 fused_ordering(212) 00:15:53.779 fused_ordering(213) 00:15:53.779 fused_ordering(214) 00:15:53.779 fused_ordering(215) 00:15:53.779 fused_ordering(216) 00:15:53.779 fused_ordering(217) 00:15:53.779 fused_ordering(218) 00:15:53.779 fused_ordering(219) 00:15:53.779 fused_ordering(220) 00:15:53.779 fused_ordering(221) 00:15:53.779 fused_ordering(222) 00:15:53.779 fused_ordering(223) 00:15:53.779 fused_ordering(224) 00:15:53.779 fused_ordering(225) 00:15:53.779 fused_ordering(226) 00:15:53.779 fused_ordering(227) 00:15:53.779 fused_ordering(228) 00:15:53.779 fused_ordering(229) 00:15:53.779 fused_ordering(230) 00:15:53.779 fused_ordering(231) 00:15:53.779 fused_ordering(232) 00:15:53.779 fused_ordering(233) 00:15:53.779 fused_ordering(234) 00:15:53.779 fused_ordering(235) 00:15:53.780 fused_ordering(236) 00:15:53.780 fused_ordering(237) 00:15:53.780 fused_ordering(238) 00:15:53.780 fused_ordering(239) 00:15:53.780 fused_ordering(240) 00:15:53.780 fused_ordering(241) 00:15:53.780 fused_ordering(242) 00:15:53.780 fused_ordering(243) 00:15:53.780 fused_ordering(244) 00:15:53.780 fused_ordering(245) 00:15:53.780 fused_ordering(246) 00:15:53.780 fused_ordering(247) 00:15:53.780 fused_ordering(248) 00:15:53.780 fused_ordering(249) 00:15:53.780 fused_ordering(250) 00:15:53.780 fused_ordering(251) 00:15:53.780 fused_ordering(252) 00:15:53.780 fused_ordering(253) 00:15:53.780 fused_ordering(254) 00:15:53.780 fused_ordering(255) 00:15:53.780 fused_ordering(256) 00:15:53.780 fused_ordering(257) 00:15:53.780 fused_ordering(258) 00:15:53.780 fused_ordering(259) 00:15:53.780 fused_ordering(260) 00:15:53.780 fused_ordering(261) 00:15:53.780 fused_ordering(262) 00:15:53.780 fused_ordering(263) 00:15:53.780 fused_ordering(264) 00:15:53.780 fused_ordering(265) 00:15:53.780 fused_ordering(266) 00:15:53.780 fused_ordering(267) 00:15:53.780 fused_ordering(268) 00:15:53.780 fused_ordering(269) 00:15:53.780 fused_ordering(270) 00:15:53.780 fused_ordering(271) 00:15:53.780 fused_ordering(272) 00:15:53.780 fused_ordering(273) 00:15:53.780 fused_ordering(274) 00:15:53.780 fused_ordering(275) 00:15:53.780 fused_ordering(276) 00:15:53.780 
fused_ordering(277) 00:15:53.780 fused_ordering(278) 00:15:53.780 fused_ordering(279) 00:15:53.780 fused_ordering(280) 00:15:53.780 fused_ordering(281) 00:15:53.780 fused_ordering(282) 00:15:53.780 fused_ordering(283) 00:15:53.780 fused_ordering(284) 00:15:53.780 fused_ordering(285) 00:15:53.780 fused_ordering(286) 00:15:53.780 fused_ordering(287) 00:15:53.780 fused_ordering(288) 00:15:53.780 fused_ordering(289) 00:15:53.780 fused_ordering(290) 00:15:53.780 fused_ordering(291) 00:15:53.780 fused_ordering(292) 00:15:53.780 fused_ordering(293) 00:15:53.780 fused_ordering(294) 00:15:53.780 fused_ordering(295) 00:15:53.780 fused_ordering(296) 00:15:53.780 fused_ordering(297) 00:15:53.780 fused_ordering(298) 00:15:53.780 fused_ordering(299) 00:15:53.780 fused_ordering(300) 00:15:53.780 fused_ordering(301) 00:15:53.780 fused_ordering(302) 00:15:53.780 fused_ordering(303) 00:15:53.780 fused_ordering(304) 00:15:53.780 fused_ordering(305) 00:15:53.780 fused_ordering(306) 00:15:53.780 fused_ordering(307) 00:15:53.780 fused_ordering(308) 00:15:53.780 fused_ordering(309) 00:15:53.780 fused_ordering(310) 00:15:53.780 fused_ordering(311) 00:15:53.780 fused_ordering(312) 00:15:53.780 fused_ordering(313) 00:15:53.780 fused_ordering(314) 00:15:53.780 fused_ordering(315) 00:15:53.780 fused_ordering(316) 00:15:53.780 fused_ordering(317) 00:15:53.780 fused_ordering(318) 00:15:53.780 fused_ordering(319) 00:15:53.780 fused_ordering(320) 00:15:53.780 fused_ordering(321) 00:15:53.780 fused_ordering(322) 00:15:53.780 fused_ordering(323) 00:15:53.780 fused_ordering(324) 00:15:53.780 fused_ordering(325) 00:15:53.780 fused_ordering(326) 00:15:53.780 fused_ordering(327) 00:15:53.780 fused_ordering(328) 00:15:53.780 fused_ordering(329) 00:15:53.780 fused_ordering(330) 00:15:53.780 fused_ordering(331) 00:15:53.780 fused_ordering(332) 00:15:53.780 fused_ordering(333) 00:15:53.780 fused_ordering(334) 00:15:53.780 fused_ordering(335) 00:15:53.780 fused_ordering(336) 00:15:53.780 fused_ordering(337) 00:15:53.780 fused_ordering(338) 00:15:53.780 fused_ordering(339) 00:15:53.780 fused_ordering(340) 00:15:53.780 fused_ordering(341) 00:15:53.780 fused_ordering(342) 00:15:53.780 fused_ordering(343) 00:15:53.780 fused_ordering(344) 00:15:53.780 fused_ordering(345) 00:15:53.780 fused_ordering(346) 00:15:53.780 fused_ordering(347) 00:15:53.780 fused_ordering(348) 00:15:53.780 fused_ordering(349) 00:15:53.780 fused_ordering(350) 00:15:53.780 fused_ordering(351) 00:15:53.780 fused_ordering(352) 00:15:53.780 fused_ordering(353) 00:15:53.780 fused_ordering(354) 00:15:53.780 fused_ordering(355) 00:15:53.780 fused_ordering(356) 00:15:53.780 fused_ordering(357) 00:15:53.780 fused_ordering(358) 00:15:53.780 fused_ordering(359) 00:15:53.780 fused_ordering(360) 00:15:53.780 fused_ordering(361) 00:15:53.780 fused_ordering(362) 00:15:53.780 fused_ordering(363) 00:15:53.780 fused_ordering(364) 00:15:53.780 fused_ordering(365) 00:15:53.780 fused_ordering(366) 00:15:53.780 fused_ordering(367) 00:15:53.780 fused_ordering(368) 00:15:53.780 fused_ordering(369) 00:15:53.780 fused_ordering(370) 00:15:53.780 fused_ordering(371) 00:15:53.780 fused_ordering(372) 00:15:53.780 fused_ordering(373) 00:15:53.780 fused_ordering(374) 00:15:53.780 fused_ordering(375) 00:15:53.780 fused_ordering(376) 00:15:53.780 fused_ordering(377) 00:15:53.780 fused_ordering(378) 00:15:53.780 fused_ordering(379) 00:15:53.780 fused_ordering(380) 00:15:53.780 fused_ordering(381) 00:15:53.780 fused_ordering(382) 00:15:53.780 fused_ordering(383) 00:15:53.780 fused_ordering(384) 
00:15:53.780 fused_ordering(385) 00:15:53.780 fused_ordering(386) 00:15:53.780 fused_ordering(387) 00:15:53.780 fused_ordering(388) 00:15:53.780 fused_ordering(389) 00:15:53.780 fused_ordering(390) 00:15:53.780 fused_ordering(391) 00:15:53.780 fused_ordering(392) 00:15:53.780 fused_ordering(393) 00:15:53.780 fused_ordering(394) 00:15:53.780 fused_ordering(395) 00:15:53.780 fused_ordering(396) 00:15:53.780 fused_ordering(397) 00:15:53.780 fused_ordering(398) 00:15:53.780 fused_ordering(399) 00:15:53.780 fused_ordering(400) 00:15:53.780 fused_ordering(401) 00:15:53.780 fused_ordering(402) 00:15:53.780 fused_ordering(403) 00:15:53.780 fused_ordering(404) 00:15:53.780 fused_ordering(405) 00:15:53.780 fused_ordering(406) 00:15:53.780 fused_ordering(407) 00:15:53.780 fused_ordering(408) 00:15:53.780 fused_ordering(409) 00:15:53.780 fused_ordering(410) 00:15:55.180 fused_ordering(411) 00:15:55.180 fused_ordering(412) 00:15:55.180 fused_ordering(413) 00:15:55.180 fused_ordering(414) 00:15:55.180 fused_ordering(415) 00:15:55.180 fused_ordering(416) 00:15:55.180 fused_ordering(417) 00:15:55.180 fused_ordering(418) 00:15:55.180 fused_ordering(419) 00:15:55.180 fused_ordering(420) 00:15:55.180 fused_ordering(421) 00:15:55.180 fused_ordering(422) 00:15:55.180 fused_ordering(423) 00:15:55.180 fused_ordering(424) 00:15:55.180 fused_ordering(425) 00:15:55.180 fused_ordering(426) 00:15:55.180 fused_ordering(427) 00:15:55.180 fused_ordering(428) 00:15:55.180 fused_ordering(429) 00:15:55.180 fused_ordering(430) 00:15:55.180 fused_ordering(431) 00:15:55.180 fused_ordering(432) 00:15:55.180 fused_ordering(433) 00:15:55.180 fused_ordering(434) 00:15:55.180 fused_ordering(435) 00:15:55.180 fused_ordering(436) 00:15:55.180 fused_ordering(437) 00:15:55.180 fused_ordering(438) 00:15:55.180 fused_ordering(439) 00:15:55.180 fused_ordering(440) 00:15:55.180 fused_ordering(441) 00:15:55.180 fused_ordering(442) 00:15:55.180 fused_ordering(443) 00:15:55.180 fused_ordering(444) 00:15:55.180 fused_ordering(445) 00:15:55.180 fused_ordering(446) 00:15:55.180 fused_ordering(447) 00:15:55.180 fused_ordering(448) 00:15:55.180 fused_ordering(449) 00:15:55.180 fused_ordering(450) 00:15:55.180 fused_ordering(451) 00:15:55.180 fused_ordering(452) 00:15:55.180 fused_ordering(453) 00:15:55.180 fused_ordering(454) 00:15:55.180 fused_ordering(455) 00:15:55.180 fused_ordering(456) 00:15:55.180 fused_ordering(457) 00:15:55.180 fused_ordering(458) 00:15:55.180 fused_ordering(459) 00:15:55.180 fused_ordering(460) 00:15:55.180 fused_ordering(461) 00:15:55.180 fused_ordering(462) 00:15:55.180 fused_ordering(463) 00:15:55.180 fused_ordering(464) 00:15:55.180 fused_ordering(465) 00:15:55.180 fused_ordering(466) 00:15:55.180 fused_ordering(467) 00:15:55.180 fused_ordering(468) 00:15:55.180 fused_ordering(469) 00:15:55.180 fused_ordering(470) 00:15:55.180 fused_ordering(471) 00:15:55.180 fused_ordering(472) 00:15:55.180 fused_ordering(473) 00:15:55.180 fused_ordering(474) 00:15:55.180 fused_ordering(475) 00:15:55.180 fused_ordering(476) 00:15:55.180 fused_ordering(477) 00:15:55.180 fused_ordering(478) 00:15:55.180 fused_ordering(479) 00:15:55.180 fused_ordering(480) 00:15:55.180 fused_ordering(481) 00:15:55.180 fused_ordering(482) 00:15:55.180 fused_ordering(483) 00:15:55.180 fused_ordering(484) 00:15:55.180 fused_ordering(485) 00:15:55.180 fused_ordering(486) 00:15:55.180 fused_ordering(487) 00:15:55.180 fused_ordering(488) 00:15:55.180 fused_ordering(489) 00:15:55.180 fused_ordering(490) 00:15:55.180 fused_ordering(491) 00:15:55.180 
fused_ordering(492) 00:15:55.180 fused_ordering(493) 00:15:55.180 fused_ordering(494) 00:15:55.180 fused_ordering(495) 00:15:55.180 fused_ordering(496) 00:15:55.180 fused_ordering(497) 00:15:55.180 fused_ordering(498) 00:15:55.180 fused_ordering(499) 00:15:55.180 fused_ordering(500) 00:15:55.180 fused_ordering(501) 00:15:55.180 fused_ordering(502) 00:15:55.180 fused_ordering(503) 00:15:55.180 fused_ordering(504) 00:15:55.180 fused_ordering(505) 00:15:55.180 fused_ordering(506) 00:15:55.180 fused_ordering(507) 00:15:55.180 fused_ordering(508) 00:15:55.180 fused_ordering(509) 00:15:55.180 fused_ordering(510) 00:15:55.180 fused_ordering(511) 00:15:55.180 fused_ordering(512) 00:15:55.180 fused_ordering(513) 00:15:55.180 fused_ordering(514) 00:15:55.180 fused_ordering(515) 00:15:55.180 fused_ordering(516) 00:15:55.180 fused_ordering(517) 00:15:55.180 fused_ordering(518) 00:15:55.180 fused_ordering(519) 00:15:55.180 fused_ordering(520) 00:15:55.180 fused_ordering(521) 00:15:55.180 fused_ordering(522) 00:15:55.180 fused_ordering(523) 00:15:55.180 fused_ordering(524) 00:15:55.180 fused_ordering(525) 00:15:55.180 fused_ordering(526) 00:15:55.180 fused_ordering(527) 00:15:55.180 fused_ordering(528) 00:15:55.180 fused_ordering(529) 00:15:55.180 fused_ordering(530) 00:15:55.180 fused_ordering(531) 00:15:55.180 fused_ordering(532) 00:15:55.181 fused_ordering(533) 00:15:55.181 fused_ordering(534) 00:15:55.181 fused_ordering(535) 00:15:55.181 fused_ordering(536) 00:15:55.181 fused_ordering(537) 00:15:55.181 fused_ordering(538) 00:15:55.181 fused_ordering(539) 00:15:55.181 fused_ordering(540) 00:15:55.181 fused_ordering(541) 00:15:55.181 fused_ordering(542) 00:15:55.181 fused_ordering(543) 00:15:55.181 fused_ordering(544) 00:15:55.181 fused_ordering(545) 00:15:55.181 fused_ordering(546) 00:15:55.181 fused_ordering(547) 00:15:55.181 fused_ordering(548) 00:15:55.181 fused_ordering(549) 00:15:55.181 fused_ordering(550) 00:15:55.181 fused_ordering(551) 00:15:55.181 fused_ordering(552) 00:15:55.181 fused_ordering(553) 00:15:55.181 fused_ordering(554) 00:15:55.181 fused_ordering(555) 00:15:55.181 fused_ordering(556) 00:15:55.181 fused_ordering(557) 00:15:55.181 fused_ordering(558) 00:15:55.181 fused_ordering(559) 00:15:55.181 fused_ordering(560) 00:15:55.181 fused_ordering(561) 00:15:55.181 fused_ordering(562) 00:15:55.181 fused_ordering(563) 00:15:55.181 fused_ordering(564) 00:15:55.181 fused_ordering(565) 00:15:55.181 fused_ordering(566) 00:15:55.181 fused_ordering(567) 00:15:55.181 fused_ordering(568) 00:15:55.181 fused_ordering(569) 00:15:55.181 fused_ordering(570) 00:15:55.181 fused_ordering(571) 00:15:55.181 fused_ordering(572) 00:15:55.181 fused_ordering(573) 00:15:55.181 fused_ordering(574) 00:15:55.181 fused_ordering(575) 00:15:55.181 fused_ordering(576) 00:15:55.181 fused_ordering(577) 00:15:55.181 fused_ordering(578) 00:15:55.181 fused_ordering(579) 00:15:55.181 fused_ordering(580) 00:15:55.181 fused_ordering(581) 00:15:55.181 fused_ordering(582) 00:15:55.181 fused_ordering(583) 00:15:55.181 fused_ordering(584) 00:15:55.181 fused_ordering(585) 00:15:55.181 fused_ordering(586) 00:15:55.181 fused_ordering(587) 00:15:55.181 fused_ordering(588) 00:15:55.181 fused_ordering(589) 00:15:55.181 fused_ordering(590) 00:15:55.181 fused_ordering(591) 00:15:55.181 fused_ordering(592) 00:15:55.181 fused_ordering(593) 00:15:55.181 fused_ordering(594) 00:15:55.181 fused_ordering(595) 00:15:55.181 fused_ordering(596) 00:15:55.181 fused_ordering(597) 00:15:55.181 fused_ordering(598) 00:15:55.181 fused_ordering(599) 
00:15:55.181 fused_ordering(600) 00:15:55.181 fused_ordering(601) 00:15:55.181 fused_ordering(602) 00:15:55.181 fused_ordering(603) 00:15:55.181 fused_ordering(604) 00:15:55.181 fused_ordering(605) 00:15:55.181 fused_ordering(606) 00:15:55.181 fused_ordering(607) 00:15:55.181 fused_ordering(608) 00:15:55.181 fused_ordering(609) 00:15:55.181 fused_ordering(610) 00:15:55.181 fused_ordering(611) 00:15:55.181 fused_ordering(612) 00:15:55.181 fused_ordering(613) 00:15:55.181 fused_ordering(614) 00:15:55.181 fused_ordering(615) 00:15:56.122 fused_ordering(616) 00:15:56.122 fused_ordering(617) 00:15:56.122 fused_ordering(618) 00:15:56.122 fused_ordering(619) 00:15:56.122 fused_ordering(620) 00:15:56.122 fused_ordering(621) 00:15:56.122 fused_ordering(622) 00:15:56.122 fused_ordering(623) 00:15:56.122 fused_ordering(624) 00:15:56.122 fused_ordering(625) 00:15:56.122 fused_ordering(626) 00:15:56.122 fused_ordering(627) 00:15:56.122 fused_ordering(628) 00:15:56.122 fused_ordering(629) 00:15:56.122 fused_ordering(630) 00:15:56.122 fused_ordering(631) 00:15:56.122 fused_ordering(632) 00:15:56.122 fused_ordering(633) 00:15:56.122 fused_ordering(634) 00:15:56.122 fused_ordering(635) 00:15:56.122 fused_ordering(636) 00:15:56.122 fused_ordering(637) 00:15:56.122 fused_ordering(638) 00:15:56.122 fused_ordering(639) 00:15:56.122 fused_ordering(640) 00:15:56.122 fused_ordering(641) 00:15:56.122 fused_ordering(642) 00:15:56.122 fused_ordering(643) 00:15:56.122 fused_ordering(644) 00:15:56.122 fused_ordering(645) 00:15:56.122 fused_ordering(646) 00:15:56.122 fused_ordering(647) 00:15:56.122 fused_ordering(648) 00:15:56.122 fused_ordering(649) 00:15:56.122 fused_ordering(650) 00:15:56.122 fused_ordering(651) 00:15:56.122 fused_ordering(652) 00:15:56.122 fused_ordering(653) 00:15:56.122 fused_ordering(654) 00:15:56.122 fused_ordering(655) 00:15:56.122 fused_ordering(656) 00:15:56.122 fused_ordering(657) 00:15:56.122 fused_ordering(658) 00:15:56.122 fused_ordering(659) 00:15:56.122 fused_ordering(660) 00:15:56.122 fused_ordering(661) 00:15:56.122 fused_ordering(662) 00:15:56.122 fused_ordering(663) 00:15:56.122 fused_ordering(664) 00:15:56.122 fused_ordering(665) 00:15:56.122 fused_ordering(666) 00:15:56.122 fused_ordering(667) 00:15:56.122 fused_ordering(668) 00:15:56.122 fused_ordering(669) 00:15:56.122 fused_ordering(670) 00:15:56.122 fused_ordering(671) 00:15:56.122 fused_ordering(672) 00:15:56.122 fused_ordering(673) 00:15:56.122 fused_ordering(674) 00:15:56.122 fused_ordering(675) 00:15:56.122 fused_ordering(676) 00:15:56.122 fused_ordering(677) 00:15:56.122 fused_ordering(678) 00:15:56.122 fused_ordering(679) 00:15:56.122 fused_ordering(680) 00:15:56.122 fused_ordering(681) 00:15:56.122 fused_ordering(682) 00:15:56.122 fused_ordering(683) 00:15:56.122 fused_ordering(684) 00:15:56.122 fused_ordering(685) 00:15:56.122 fused_ordering(686) 00:15:56.122 fused_ordering(687) 00:15:56.122 fused_ordering(688) 00:15:56.122 fused_ordering(689) 00:15:56.122 fused_ordering(690) 00:15:56.122 fused_ordering(691) 00:15:56.122 fused_ordering(692) 00:15:56.122 fused_ordering(693) 00:15:56.122 fused_ordering(694) 00:15:56.122 fused_ordering(695) 00:15:56.122 fused_ordering(696) 00:15:56.122 fused_ordering(697) 00:15:56.122 fused_ordering(698) 00:15:56.122 fused_ordering(699) 00:15:56.122 fused_ordering(700) 00:15:56.122 fused_ordering(701) 00:15:56.122 fused_ordering(702) 00:15:56.122 fused_ordering(703) 00:15:56.122 fused_ordering(704) 00:15:56.122 fused_ordering(705) 00:15:56.122 fused_ordering(706) 00:15:56.122 
fused_ordering(707) 00:15:56.122 fused_ordering(708) 00:15:56.122 fused_ordering(709) 00:15:56.122 fused_ordering(710) 00:15:56.122 fused_ordering(711) 00:15:56.122 fused_ordering(712) 00:15:56.122 fused_ordering(713) 00:15:56.122 fused_ordering(714) 00:15:56.122 fused_ordering(715) 00:15:56.122 fused_ordering(716) 00:15:56.122 fused_ordering(717) 00:15:56.123 fused_ordering(718) 00:15:56.123 fused_ordering(719) 00:15:56.123 fused_ordering(720) 00:15:56.123 fused_ordering(721) 00:15:56.123 fused_ordering(722) 00:15:56.123 fused_ordering(723) 00:15:56.123 fused_ordering(724) 00:15:56.123 fused_ordering(725) 00:15:56.123 fused_ordering(726) 00:15:56.123 fused_ordering(727) 00:15:56.123 fused_ordering(728) 00:15:56.123 fused_ordering(729) 00:15:56.123 fused_ordering(730) 00:15:56.123 fused_ordering(731) 00:15:56.123 fused_ordering(732) 00:15:56.123 fused_ordering(733) 00:15:56.123 fused_ordering(734) 00:15:56.123 fused_ordering(735) 00:15:56.123 fused_ordering(736) 00:15:56.123 fused_ordering(737) 00:15:56.123 fused_ordering(738) 00:15:56.123 fused_ordering(739) 00:15:56.123 fused_ordering(740) 00:15:56.123 fused_ordering(741) 00:15:56.123 fused_ordering(742) 00:15:56.123 fused_ordering(743) 00:15:56.123 fused_ordering(744) 00:15:56.123 fused_ordering(745) 00:15:56.123 fused_ordering(746) 00:15:56.123 fused_ordering(747) 00:15:56.123 fused_ordering(748) 00:15:56.123 fused_ordering(749) 00:15:56.123 fused_ordering(750) 00:15:56.123 fused_ordering(751) 00:15:56.123 fused_ordering(752) 00:15:56.123 fused_ordering(753) 00:15:56.123 fused_ordering(754) 00:15:56.123 fused_ordering(755) 00:15:56.123 fused_ordering(756) 00:15:56.123 fused_ordering(757) 00:15:56.123 fused_ordering(758) 00:15:56.123 fused_ordering(759) 00:15:56.123 fused_ordering(760) 00:15:56.123 fused_ordering(761) 00:15:56.123 fused_ordering(762) 00:15:56.123 fused_ordering(763) 00:15:56.123 fused_ordering(764) 00:15:56.123 fused_ordering(765) 00:15:56.123 fused_ordering(766) 00:15:56.123 fused_ordering(767) 00:15:56.123 fused_ordering(768) 00:15:56.123 fused_ordering(769) 00:15:56.123 fused_ordering(770) 00:15:56.123 fused_ordering(771) 00:15:56.123 fused_ordering(772) 00:15:56.123 fused_ordering(773) 00:15:56.123 fused_ordering(774) 00:15:56.123 fused_ordering(775) 00:15:56.123 fused_ordering(776) 00:15:56.123 fused_ordering(777) 00:15:56.123 fused_ordering(778) 00:15:56.123 fused_ordering(779) 00:15:56.123 fused_ordering(780) 00:15:56.123 fused_ordering(781) 00:15:56.123 fused_ordering(782) 00:15:56.123 fused_ordering(783) 00:15:56.123 fused_ordering(784) 00:15:56.123 fused_ordering(785) 00:15:56.123 fused_ordering(786) 00:15:56.123 fused_ordering(787) 00:15:56.123 fused_ordering(788) 00:15:56.123 fused_ordering(789) 00:15:56.123 fused_ordering(790) 00:15:56.123 fused_ordering(791) 00:15:56.123 fused_ordering(792) 00:15:56.123 fused_ordering(793) 00:15:56.123 fused_ordering(794) 00:15:56.123 fused_ordering(795) 00:15:56.123 fused_ordering(796) 00:15:56.123 fused_ordering(797) 00:15:56.123 fused_ordering(798) 00:15:56.123 fused_ordering(799) 00:15:56.123 fused_ordering(800) 00:15:56.123 fused_ordering(801) 00:15:56.123 fused_ordering(802) 00:15:56.123 fused_ordering(803) 00:15:56.123 fused_ordering(804) 00:15:56.123 fused_ordering(805) 00:15:56.123 fused_ordering(806) 00:15:56.123 fused_ordering(807) 00:15:56.123 fused_ordering(808) 00:15:56.123 fused_ordering(809) 00:15:56.123 fused_ordering(810) 00:15:56.123 fused_ordering(811) 00:15:56.123 fused_ordering(812) 00:15:56.123 fused_ordering(813) 00:15:56.123 fused_ordering(814) 
00:15:56.123 fused_ordering(815) 00:15:56.123 fused_ordering(816) 00:15:56.123 fused_ordering(817) 00:15:56.123 fused_ordering(818) 00:15:56.123 fused_ordering(819) 00:15:56.123 fused_ordering(820) 00:15:57.064 fused_ordering(821) 00:15:57.064 fused_ordering(822) 00:15:57.064 fused_ordering(823) 00:15:57.064 fused_ordering(824) 00:15:57.064 fused_ordering(825) 00:15:57.064 fused_ordering(826) 00:15:57.064 fused_ordering(827) 00:15:57.064 fused_ordering(828) 00:15:57.064 fused_ordering(829) 00:15:57.064 fused_ordering(830) 00:15:57.064 fused_ordering(831) 00:15:57.064 fused_ordering(832) 00:15:57.064 fused_ordering(833) 00:15:57.064 fused_ordering(834) 00:15:57.064 fused_ordering(835) 00:15:57.064 fused_ordering(836) 00:15:57.064 fused_ordering(837) 00:15:57.064 fused_ordering(838) 00:15:57.064 fused_ordering(839) 00:15:57.064 fused_ordering(840) 00:15:57.064 fused_ordering(841) 00:15:57.064 fused_ordering(842) 00:15:57.064 fused_ordering(843) 00:15:57.064 fused_ordering(844) 00:15:57.064 fused_ordering(845) 00:15:57.064 fused_ordering(846) 00:15:57.064 fused_ordering(847) 00:15:57.064 fused_ordering(848) 00:15:57.064 fused_ordering(849) 00:15:57.064 fused_ordering(850) 00:15:57.064 fused_ordering(851) 00:15:57.064 fused_ordering(852) 00:15:57.064 fused_ordering(853) 00:15:57.064 fused_ordering(854) 00:15:57.064 fused_ordering(855) 00:15:57.064 fused_ordering(856) 00:15:57.064 fused_ordering(857) 00:15:57.064 fused_ordering(858) 00:15:57.064 fused_ordering(859) 00:15:57.064 fused_ordering(860) 00:15:57.064 fused_ordering(861) 00:15:57.064 fused_ordering(862) 00:15:57.064 fused_ordering(863) 00:15:57.064 fused_ordering(864) 00:15:57.064 fused_ordering(865) 00:15:57.064 fused_ordering(866) 00:15:57.064 fused_ordering(867) 00:15:57.064 fused_ordering(868) 00:15:57.064 fused_ordering(869) 00:15:57.064 fused_ordering(870) 00:15:57.064 fused_ordering(871) 00:15:57.064 fused_ordering(872) 00:15:57.064 fused_ordering(873) 00:15:57.064 fused_ordering(874) 00:15:57.064 fused_ordering(875) 00:15:57.064 fused_ordering(876) 00:15:57.064 fused_ordering(877) 00:15:57.064 fused_ordering(878) 00:15:57.064 fused_ordering(879) 00:15:57.064 fused_ordering(880) 00:15:57.064 fused_ordering(881) 00:15:57.064 fused_ordering(882) 00:15:57.064 fused_ordering(883) 00:15:57.064 fused_ordering(884) 00:15:57.064 fused_ordering(885) 00:15:57.064 fused_ordering(886) 00:15:57.064 fused_ordering(887) 00:15:57.064 fused_ordering(888) 00:15:57.064 fused_ordering(889) 00:15:57.065 fused_ordering(890) 00:15:57.065 fused_ordering(891) 00:15:57.065 fused_ordering(892) 00:15:57.065 fused_ordering(893) 00:15:57.065 fused_ordering(894) 00:15:57.065 fused_ordering(895) 00:15:57.065 fused_ordering(896) 00:15:57.065 fused_ordering(897) 00:15:57.065 fused_ordering(898) 00:15:57.065 fused_ordering(899) 00:15:57.065 fused_ordering(900) 00:15:57.065 fused_ordering(901) 00:15:57.065 fused_ordering(902) 00:15:57.065 fused_ordering(903) 00:15:57.065 fused_ordering(904) 00:15:57.065 fused_ordering(905) 00:15:57.065 fused_ordering(906) 00:15:57.065 fused_ordering(907) 00:15:57.065 fused_ordering(908) 00:15:57.065 fused_ordering(909) 00:15:57.065 fused_ordering(910) 00:15:57.065 fused_ordering(911) 00:15:57.065 fused_ordering(912) 00:15:57.065 fused_ordering(913) 00:15:57.065 fused_ordering(914) 00:15:57.065 fused_ordering(915) 00:15:57.065 fused_ordering(916) 00:15:57.065 fused_ordering(917) 00:15:57.065 fused_ordering(918) 00:15:57.065 fused_ordering(919) 00:15:57.065 fused_ordering(920) 00:15:57.065 fused_ordering(921) 00:15:57.065 
fused_ordering(922) 00:15:57.065 fused_ordering(923) 00:15:57.065 fused_ordering(924) 00:15:57.065 fused_ordering(925) 00:15:57.065 fused_ordering(926) 00:15:57.065 fused_ordering(927) 00:15:57.065 fused_ordering(928) 00:15:57.065 fused_ordering(929) 00:15:57.065 fused_ordering(930) 00:15:57.065 fused_ordering(931) 00:15:57.065 fused_ordering(932) 00:15:57.065 fused_ordering(933) 00:15:57.065 fused_ordering(934) 00:15:57.065 fused_ordering(935) 00:15:57.065 fused_ordering(936) 00:15:57.065 fused_ordering(937) 00:15:57.065 fused_ordering(938) 00:15:57.065 fused_ordering(939) 00:15:57.065 fused_ordering(940) 00:15:57.065 fused_ordering(941) 00:15:57.065 fused_ordering(942) 00:15:57.065 fused_ordering(943) 00:15:57.065 fused_ordering(944) 00:15:57.065 fused_ordering(945) 00:15:57.065 fused_ordering(946) 00:15:57.065 fused_ordering(947) 00:15:57.065 fused_ordering(948) 00:15:57.065 fused_ordering(949) 00:15:57.065 fused_ordering(950) 00:15:57.065 fused_ordering(951) 00:15:57.065 fused_ordering(952) 00:15:57.065 fused_ordering(953) 00:15:57.065 fused_ordering(954) 00:15:57.065 fused_ordering(955) 00:15:57.065 fused_ordering(956) 00:15:57.065 fused_ordering(957) 00:15:57.065 fused_ordering(958) 00:15:57.065 fused_ordering(959) 00:15:57.065 fused_ordering(960) 00:15:57.065 fused_ordering(961) 00:15:57.065 fused_ordering(962) 00:15:57.065 fused_ordering(963) 00:15:57.065 fused_ordering(964) 00:15:57.065 fused_ordering(965) 00:15:57.065 fused_ordering(966) 00:15:57.065 fused_ordering(967) 00:15:57.065 fused_ordering(968) 00:15:57.065 fused_ordering(969) 00:15:57.065 fused_ordering(970) 00:15:57.065 fused_ordering(971) 00:15:57.065 fused_ordering(972) 00:15:57.065 fused_ordering(973) 00:15:57.065 fused_ordering(974) 00:15:57.065 fused_ordering(975) 00:15:57.065 fused_ordering(976) 00:15:57.065 fused_ordering(977) 00:15:57.065 fused_ordering(978) 00:15:57.065 fused_ordering(979) 00:15:57.065 fused_ordering(980) 00:15:57.065 fused_ordering(981) 00:15:57.065 fused_ordering(982) 00:15:57.065 fused_ordering(983) 00:15:57.065 fused_ordering(984) 00:15:57.065 fused_ordering(985) 00:15:57.065 fused_ordering(986) 00:15:57.065 fused_ordering(987) 00:15:57.065 fused_ordering(988) 00:15:57.065 fused_ordering(989) 00:15:57.065 fused_ordering(990) 00:15:57.065 fused_ordering(991) 00:15:57.065 fused_ordering(992) 00:15:57.065 fused_ordering(993) 00:15:57.065 fused_ordering(994) 00:15:57.065 fused_ordering(995) 00:15:57.065 fused_ordering(996) 00:15:57.065 fused_ordering(997) 00:15:57.065 fused_ordering(998) 00:15:57.065 fused_ordering(999) 00:15:57.065 fused_ordering(1000) 00:15:57.065 fused_ordering(1001) 00:15:57.065 fused_ordering(1002) 00:15:57.065 fused_ordering(1003) 00:15:57.065 fused_ordering(1004) 00:15:57.065 fused_ordering(1005) 00:15:57.065 fused_ordering(1006) 00:15:57.065 fused_ordering(1007) 00:15:57.065 fused_ordering(1008) 00:15:57.065 fused_ordering(1009) 00:15:57.065 fused_ordering(1010) 00:15:57.065 fused_ordering(1011) 00:15:57.065 fused_ordering(1012) 00:15:57.065 fused_ordering(1013) 00:15:57.065 fused_ordering(1014) 00:15:57.065 fused_ordering(1015) 00:15:57.065 fused_ordering(1016) 00:15:57.065 fused_ordering(1017) 00:15:57.065 fused_ordering(1018) 00:15:57.065 fused_ordering(1019) 00:15:57.065 fused_ordering(1020) 00:15:57.065 fused_ordering(1021) 00:15:57.065 fused_ordering(1022) 00:15:57.065 fused_ordering(1023) 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:57.065 11:04:16 
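With the fused_ordering run complete above (counters 0 through 1023, followed by the trap reset), the whole test body can be read back out of the xtrace as a handful of RPCs plus one test-binary invocation. A condensed sketch, with every flag value copied from the rpc_cmd lines above; rpc_cmd in the harness ultimately issues the same RPCs that scripts/rpc.py does, which is the only substitution made here:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192                      # target/fused_ordering.sh@15
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512          # 1000 MB null bdev, 512 B blocks ("size: 1GB" above)
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# target/fused_ordering.sh@22: exercise the subsystem over TCP with the fused-ordering test app
"$SPDK_DIR/test/nvme/fused_ordering/fused_ordering" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The nvmftestfini lines that follow are the matching teardown: clear the trap, modprobe -r the nvme-tcp/nvme-fabrics modules, and kill the target process (pid 1419509 in this run).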
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:57.065 rmmod nvme_tcp 00:15:57.065 rmmod nvme_fabrics 00:15:57.065 rmmod nvme_keyring 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1419509 ']' 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1419509 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1419509 ']' 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1419509 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1419509 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1419509' 00:15:57.065 killing process with pid 1419509 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1419509 00:15:57.065 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1419509 00:15:57.325 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:57.325 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:57.325 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:57.325 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:57.325 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:57.325 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.325 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.325 11:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.236 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:59.496 00:15:59.496 real 0m14.084s 00:15:59.496 user 0m9.641s 00:15:59.496 sys 0m7.957s 00:15:59.496 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:59.496 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:59.496 ************************************ 00:15:59.496 END TEST nvmf_fused_ordering 00:15:59.496 ************************************ 00:15:59.496 11:04:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:59.496 11:04:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:59.496 11:04:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:59.496 11:04:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:59.497 ************************************ 00:15:59.497 START TEST nvmf_ns_masking 00:15:59.497 ************************************ 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:59.497 * Looking for test storage... 00:15:59.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=cac78986-ae21-4da5-900d-e45478314777 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=314cac71-b77e-4393-9b3e-dd96e3445be5 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0aedd24c-e2f7-49a2-9b21-4359d554931f 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:59.497 11:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.780 
11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:04.780 11:04:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:04.780 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:04.781 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:04.781 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:04.781 Found net devices under 0000:86:00.0: cvl_0_0 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:04.781 Found net devices under 0000:86:00.1: cvl_0_1 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:04.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:16:04.781 00:16:04.781 --- 10.0.0.2 ping statistics --- 00:16:04.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.781 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:16:04.781 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:05.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:05.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:16:05.042 00:16:05.042 --- 10.0.0.1 ping statistics --- 00:16:05.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.042 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1423971 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1423971 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1423971 ']' 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.042 11:04:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:05.042 [2024-07-26 11:04:24.355031] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
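At this point the trace has built the usual two-endpoint TCP topology: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the other port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in the firewall, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace. A minimal sketch of that setup, reconstructed from the commands logged above (interface names, addresses and binary path are the ones used in this run):

# target port lives in its own namespace; initiator stays in the default one
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
modprobe nvme-tcp
# run the SPDK target inside the namespace so it listens on 10.0.0.2:4420
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &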
00:16:05.042 [2024-07-26 11:04:24.355093] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.042 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.042 [2024-07-26 11:04:24.412958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.042 [2024-07-26 11:04:24.484305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.042 [2024-07-26 11:04:24.484348] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.042 [2024-07-26 11:04:24.484355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.042 [2024-07-26 11:04:24.484361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.042 [2024-07-26 11:04:24.484366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.042 [2024-07-26 11:04:24.484384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.982 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:05.982 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:05.982 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:05.982 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:05.982 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:05.982 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.982 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:05.982 [2024-07-26 11:04:25.344082] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.982 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:05.982 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:05.982 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:06.242 Malloc1 00:16:06.242 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:06.242 Malloc2 00:16:06.502 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:06.502 11:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:06.762 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.022 [2024-07-26 11:04:26.280336] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.022 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:07.022 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0aedd24c-e2f7-49a2-9b21-4359d554931f -a 10.0.0.2 -s 4420 -i 4 00:16:07.022 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:07.022 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:07.022 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:07.022 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:07.022 11:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:08.946 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:08.946 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:08.946 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:08.946 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:08.946 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.946 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:08.946 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:08.946 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:09.206 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:09.206 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:09.206 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:09.206 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:09.207 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:09.207 [ 0]:0x1 00:16:09.207 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:09.207 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.207 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=45c8590b9aec4ce3815ac2107c06f670 00:16:09.207 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 45c8590b9aec4ce3815ac2107c06f670 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.207 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:09.207 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:09.207 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:09.207 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:09.207 [ 0]:0x1 00:16:09.207 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:09.207 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.467 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=45c8590b9aec4ce3815ac2107c06f670 00:16:09.467 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 45c8590b9aec4ce3815ac2107c06f670 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.467 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:09.467 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:09.467 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:09.467 [ 1]:0x2 00:16:09.467 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:09.467 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.467 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=656d557dee6d426892037f690a4a5b84 00:16:09.467 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 656d557dee6d426892037f690a4a5b84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.467 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:09.467 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:09.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.467 11:04:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.727 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:09.727 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:09.727 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0aedd24c-e2f7-49a2-9b21-4359d554931f -a 10.0.0.2 -s 4420 -i 4 00:16:10.005 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:10.005 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:10.005 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:10.006 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:10.006 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:10.006 11:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:11.937 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:11.937 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:11.937 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:11.937 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:11.937 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:11.937 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:11.937 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:11.937 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.198 11:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:12.198 [ 0]:0x2 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=656d557dee6d426892037f690a4a5b84 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 656d557dee6d426892037f690a4a5b84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.198 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:12.458 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:12.458 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.458 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:12.458 [ 0]:0x1 00:16:12.458 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.458 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:12.458 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=45c8590b9aec4ce3815ac2107c06f670 00:16:12.458 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 45c8590b9aec4ce3815ac2107c06f670 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.458 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:12.458 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.458 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:12.458 [ 1]:0x2 00:16:12.458 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:12.458 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.458 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=656d557dee6d426892037f690a4a5b84 00:16:12.458 11:04:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 656d557dee6d426892037f690a4a5b84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.458 11:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:12.718 [ 0]:0x2 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=656d557dee6d426892037f690a4a5b84 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 656d557dee6d426892037f690a4a5b84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
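The block above is the heart of the masking test. After the subsystem is brought up, namespace 1 is re-created with --no-auto-visible, so a connected host sees only namespace 2 until it is explicitly granted access with nvmf_ns_add_host, and loses it again after nvmf_ns_remove_host; visibility is checked by reading the NGUID back through the kernel initiator (all zeroes means the namespace is hidden). A condensed, slightly reordered sketch of that sequence using the same RPCs and nvme-cli calls as the trace (rpc.py stands for the full scripts/rpc.py path; the host UUID argument is omitted):

# target-side bring-up
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py bdev_malloc_create 64 512 -b Malloc2
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

# host side: connect and check which namespaces are exposed
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -i 4
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeroes: nsid 1 is masked
nvme id-ns /dev/nvme0 -n 0x2 -o json | jq -r .nguid   # real NGUID: nsid 2 is visible

rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # real NGUID: access granted

rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeroes again: access revoked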
00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:12.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.718 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:12.979 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:12.979 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0aedd24c-e2f7-49a2-9b21-4359d554931f -a 10.0.0.2 -s 4420 -i 4 00:16:12.979 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:12.979 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:12.979 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.979 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:12.979 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:12.979 11:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:15.518 [ 0]:0x1 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=45c8590b9aec4ce3815ac2107c06f670 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 45c8590b9aec4ce3815ac2107c06f670 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:15.518 [ 1]:0x2 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=656d557dee6d426892037f690a4a5b84 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 656d557dee6d426892037f690a4a5b84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:15.518 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.519 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:15.519 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:15.519 11:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:15.779 [ 0]:0x2 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=656d557dee6d426892037f690a4a5b84 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 656d557dee6d426892037f690a4a5b84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:15.779 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:15.780 [2024-07-26 11:04:35.237723] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:15.780 request: 00:16:15.780 { 00:16:15.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.780 "nsid": 2, 00:16:15.780 "host": 
"nqn.2016-06.io.spdk:host1", 00:16:15.780 "method": "nvmf_ns_remove_host", 00:16:15.780 "req_id": 1 00:16:15.780 } 00:16:15.780 Got JSON-RPC error response 00:16:15.780 response: 00:16:15.780 { 00:16:15.780 "code": -32602, 00:16:15.780 "message": "Invalid parameters" 00:16:15.780 } 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:15.780 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:16.040 [ 0]:0x2 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:16.040 11:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=656d557dee6d426892037f690a4a5b84 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 656d557dee6d426892037f690a4a5b84 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:16.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1425971 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1425971 /var/tmp/host.sock 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1425971 ']' 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:16.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:16.040 11:04:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:16.040 [2024-07-26 11:04:35.433707] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
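Earlier in this block the nvmf_ns_remove_host call on namespace 2 was rejected with JSON-RPC error -32602 (Invalid parameters): namespace 2 was created without --no-auto-visible, so per-host masking does not apply to it. The remainder of the trace then re-checks per-host visibility from a second SPDK application started on /var/tmp/host.sock, attaching one bdev_nvme controller per host NQN and comparing the UUIDs it reports against the NGUIDs assigned on the target. A trimmed sketch of that phase, using the same arguments as the trace (full paths shortened to rpc.py and spdk_tgt):

# expected failure: per-host masking only applies to namespaces created with --no-auto-visible
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 \
    || echo "rejected (-32602), as expected"

# second SPDK app acts as the host side, driven through its own RPC socket
spdk_tgt -r /var/tmp/host.sock -m 2 &

# re-create both namespaces with fixed NGUIDs and grant one to each host NQN
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CAC78986AE214DA5900DE45478314777
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 314CAC71B77E43939B3EDD96E3445BE5
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2

# each host NQN sees only its own namespace (nvme0n1 for host1, nvme1n2 for host2)
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].uuid'   # reported uuids correspond to the NGUIDs assigned above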
00:16:16.040 [2024-07-26 11:04:35.433758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1425971 ] 00:16:16.040 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.040 [2024-07-26 11:04:35.487335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.300 [2024-07-26 11:04:35.562354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.869 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:16.869 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:16:16.869 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.138 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:17.138 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid cac78986-ae21-4da5-900d-e45478314777 00:16:17.138 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:17.138 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CAC78986AE214DA5900DE45478314777 -i 00:16:17.402 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 314cac71-b77e-4393-9b3e-dd96e3445be5 00:16:17.402 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:17.402 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 314CAC71B77E43939B3EDD96E3445BE5 -i 00:16:17.661 11:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:17.661 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:17.920 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:17.920 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:18.180 nvme0n1 00:16:18.180 11:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:18.180 11:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:18.748 nvme1n2 00:16:18.748 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:18.748 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:18.748 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:18.748 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:18.748 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:18.748 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:18.748 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:18.748 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:18.748 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:19.006 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ cac78986-ae21-4da5-900d-e45478314777 == \c\a\c\7\8\9\8\6\-\a\e\2\1\-\4\d\a\5\-\9\0\0\d\-\e\4\5\4\7\8\3\1\4\7\7\7 ]] 00:16:19.006 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:19.006 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:19.006 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:19.266 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 314cac71-b77e-4393-9b3e-dd96e3445be5 == \3\1\4\c\a\c\7\1\-\b\7\7\e\-\4\3\9\3\-\9\b\3\e\-\d\d\9\6\e\3\4\4\5\b\e\5 ]] 00:16:19.266 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1425971 00:16:19.266 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1425971 ']' 00:16:19.266 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1425971 00:16:19.266 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:19.266 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:19.266 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1425971 00:16:19.266 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:19.266 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:19.266 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 1425971' 00:16:19.266 killing process with pid 1425971 00:16:19.266 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1425971 00:16:19.266 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1425971 00:16:19.525 11:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:19.785 rmmod nvme_tcp 00:16:19.785 rmmod nvme_fabrics 00:16:19.785 rmmod nvme_keyring 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1423971 ']' 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1423971 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1423971 ']' 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1423971 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1423971 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1423971' 00:16:19.785 killing process with pid 1423971 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1423971 00:16:19.785 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1423971 00:16:20.045 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:20.045 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:20.045 
11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:20.045 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.045 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:20.045 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.045 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.045 11:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:22.591 00:16:22.591 real 0m22.660s 00:16:22.591 user 0m24.515s 00:16:22.591 sys 0m6.031s 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:22.591 ************************************ 00:16:22.591 END TEST nvmf_ns_masking 00:16:22.591 ************************************ 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:22.591 ************************************ 00:16:22.591 START TEST nvmf_nvme_cli 00:16:22.591 ************************************ 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:22.591 * Looking for test storage... 
00:16:22.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.591 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.592 11:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:22.592 11:04:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:27.901 11:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:27.901 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:27.901 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:27.901 11:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:27.901 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:27.902 Found net devices under 0000:86:00.0: cvl_0_0 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:27.902 Found net devices under 0000:86:00.1: cvl_0_1 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:27.902 11:04:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:27.902 11:04:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:27.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:16:27.902 00:16:27.902 --- 10.0.0.2 ping statistics --- 00:16:27.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.902 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:27.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:16:27.902 00:16:27.902 --- 10.0.0.1 ping statistics --- 00:16:27.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.902 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1429990 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1429990 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1429990 ']' 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:27.902 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:27.902 [2024-07-26 11:04:47.177552] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:27.902 [2024-07-26 11:04:47.177598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.902 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.902 [2024-07-26 11:04:47.233650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.902 [2024-07-26 11:04:47.314342] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.902 [2024-07-26 11:04:47.314380] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.902 [2024-07-26 11:04:47.314388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.902 [2024-07-26 11:04:47.314395] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.902 [2024-07-26 11:04:47.314401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.902 [2024-07-26 11:04:47.314445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.902 [2024-07-26 11:04:47.314635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.902 [2024-07-26 11:04:47.314698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.902 [2024-07-26 11:04:47.314700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.842 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:28.842 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:28.842 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.842 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:28.842 11:04:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:28.842 [2024-07-26 11:04:48.025540] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:28.842 Malloc0 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:28.842 11:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:28.842 Malloc1 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:28.842 [2024-07-26 11:04:48.107301] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:16:28.842 00:16:28.842 Discovery Log Number of Records 2, Generation counter 2 00:16:28.842 =====Discovery Log Entry 0====== 00:16:28.842 trtype: tcp 00:16:28.842 adrfam: ipv4 00:16:28.842 subtype: current discovery subsystem 00:16:28.842 treq: not required 
00:16:28.842 portid: 0 00:16:28.842 trsvcid: 4420 00:16:28.842 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:28.842 traddr: 10.0.0.2 00:16:28.842 eflags: explicit discovery connections, duplicate discovery information 00:16:28.842 sectype: none 00:16:28.842 =====Discovery Log Entry 1====== 00:16:28.842 trtype: tcp 00:16:28.842 adrfam: ipv4 00:16:28.842 subtype: nvme subsystem 00:16:28.842 treq: not required 00:16:28.842 portid: 0 00:16:28.842 trsvcid: 4420 00:16:28.842 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:28.842 traddr: 10.0.0.2 00:16:28.842 eflags: none 00:16:28.842 sectype: none 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:28.842 11:04:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:30.224 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:30.224 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:30.224 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:30.224 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:30.224 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:30.224 11:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:32.133 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:32.134 /dev/nvme0n1 ]] 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.134 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:32.134 rmmod nvme_tcp 00:16:32.134 rmmod nvme_fabrics 00:16:32.134 rmmod nvme_keyring 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1429990 ']' 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1429990 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1429990 ']' 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1429990 00:16:32.134 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:32.395 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:32.395 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1429990 00:16:32.395 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:32.395 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:32.395 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1429990' 00:16:32.395 killing process with pid 1429990 00:16:32.395 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1429990 00:16:32.395 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1429990 00:16:32.718 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:32.718 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:32.718 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:32.718 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:32.718 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:32.718 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.718 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:32.718 11:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.628 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:34.628 00:16:34.628 real 0m12.429s 00:16:34.628 user 0m19.748s 00:16:34.628 sys 0m4.634s 00:16:34.628 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:34.628 11:04:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:34.628 ************************************ 00:16:34.628 END TEST nvmf_nvme_cli 00:16:34.628 ************************************ 00:16:34.628 11:04:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:34.628 11:04:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:34.628 11:04:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:34.628 11:04:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:34.628 11:04:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:34.628 ************************************ 00:16:34.628 START TEST nvmf_vfio_user 00:16:34.628 ************************************ 00:16:34.628 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:34.890 * Looking for test storage... 
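(For orientation: nvmf_vfio_user exercises the same target, but over the vfio-user transport instead of TCP, so each subsystem is exposed as a local vfio-user device through a socket directory under /var/run/vfio-user rather than on an IP and port. Condensed from the commands traced in this run (NUM_DEVICES=2; the rpc.py path and socket locations are this host's), the setup is roughly this sketch:

  rpc.py nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      rpc.py bdev_malloc_create 64 512 -b Malloc$i
      rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done
  # a userspace initiator then attaches to the socket, e.g.
  spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci

The trace that follows shows the full run, including the vfio-user BAR mapping and the CC.EN / CSTS.RDY controller-enable handshake logged by the identify tool.)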
00:16:34.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:34.890 11:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1431279 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1431279' 00:16:34.890 Process pid: 1431279 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1431279 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1431279 ']' 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.890 11:04:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:34.890 [2024-07-26 11:04:54.218819] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:34.890 [2024-07-26 11:04:54.218864] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.890 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.890 [2024-07-26 11:04:54.271705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.890 [2024-07-26 11:04:54.345116] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.890 [2024-07-26 11:04:54.345153] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:34.890 [2024-07-26 11:04:54.345160] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.890 [2024-07-26 11:04:54.345167] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.890 [2024-07-26 11:04:54.345172] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.890 [2024-07-26 11:04:54.345233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.890 [2024-07-26 11:04:54.345353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.890 [2024-07-26 11:04:54.345440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.890 [2024-07-26 11:04:54.345442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.831 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.831 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:35.831 11:04:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:36.771 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:36.771 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:36.771 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:36.771 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:36.771 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:36.771 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:37.031 Malloc1 00:16:37.031 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:37.291 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:37.551 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:37.551 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:37.552 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:37.552 11:04:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:37.812 Malloc2 00:16:37.812 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:16:38.072 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:38.072 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:38.333 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:38.333 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:38.333 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:38.333 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:38.333 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:38.333 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:38.333 [2024-07-26 11:04:57.758822] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:38.333 [2024-07-26 11:04:57.758871] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431978 ] 00:16:38.333 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.333 [2024-07-26 11:04:57.787237] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:38.333 [2024-07-26 11:04:57.796364] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:38.333 [2024-07-26 11:04:57.796383] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2d07c7b000 00:16:38.333 [2024-07-26 11:04:57.797361] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:38.333 [2024-07-26 11:04:57.798358] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:38.333 [2024-07-26 11:04:57.799359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:38.333 [2024-07-26 11:04:57.800376] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:38.333 [2024-07-26 11:04:57.801372] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:38.333 [2024-07-26 11:04:57.802375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:38.333 [2024-07-26 11:04:57.803384] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:38.333 [2024-07-26 11:04:57.804394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:38.333 [2024-07-26 11:04:57.805401] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:38.333 [2024-07-26 11:04:57.805410] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2d07c70000 00:16:38.333 [2024-07-26 11:04:57.806352] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:38.333 [2024-07-26 11:04:57.819493] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:38.333 [2024-07-26 11:04:57.819517] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:38.333 [2024-07-26 11:04:57.824524] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:38.333 [2024-07-26 11:04:57.824563] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:38.333 [2024-07-26 11:04:57.824637] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:38.333 [2024-07-26 11:04:57.824652] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:38.333 [2024-07-26 11:04:57.824657] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:38.333 [2024-07-26 11:04:57.825518] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:38.333 [2024-07-26 11:04:57.825529] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:38.333 [2024-07-26 11:04:57.825535] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:38.334 [2024-07-26 11:04:57.826523] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:38.334 [2024-07-26 11:04:57.826531] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:38.334 [2024-07-26 11:04:57.826537] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:38.334 [2024-07-26 11:04:57.827530] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:38.334 [2024-07-26 11:04:57.827537] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:38.334 [2024-07-26 11:04:57.828540] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:38.334 [2024-07-26 11:04:57.828548] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:38.334 [2024-07-26 11:04:57.828552] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:38.334 [2024-07-26 11:04:57.828558] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:38.334 [2024-07-26 11:04:57.828663] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:38.334 [2024-07-26 11:04:57.828667] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:38.334 [2024-07-26 11:04:57.828671] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:38.334 [2024-07-26 11:04:57.829544] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:38.596 [2024-07-26 11:04:57.830549] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:38.596 [2024-07-26 11:04:57.831560] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:38.596 [2024-07-26 11:04:57.832564] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:38.596 [2024-07-26 11:04:57.832639] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:38.596 [2024-07-26 11:04:57.833571] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:38.596 [2024-07-26 11:04:57.833579] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:38.596 [2024-07-26 11:04:57.833583] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:38.596 [2024-07-26 11:04:57.833600] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:38.596 [2024-07-26 11:04:57.833607] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:38.596 [2024-07-26 11:04:57.833621] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:38.596 [2024-07-26 11:04:57.833625] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:38.596 [2024-07-26 11:04:57.833629] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:38.596 [2024-07-26 11:04:57.833642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:38.596 [2024-07-26 11:04:57.833684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:38.596 [2024-07-26 11:04:57.833692] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:38.596 [2024-07-26 11:04:57.833696] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:38.596 [2024-07-26 11:04:57.833700] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:38.596 [2024-07-26 11:04:57.833703] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:38.596 [2024-07-26 11:04:57.833708] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:38.596 [2024-07-26 11:04:57.833712] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:38.596 [2024-07-26 11:04:57.833715] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:38.596 [2024-07-26 11:04:57.833722] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:38.596 [2024-07-26 11:04:57.833735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:38.596 [2024-07-26 11:04:57.833749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:38.596 [2024-07-26 11:04:57.833761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.596 [2024-07-26 11:04:57.833769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.596 [2024-07-26 11:04:57.833776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.596 [2024-07-26 11:04:57.833783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:38.596 [2024-07-26 11:04:57.833787] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:38.596 [2024-07-26 11:04:57.833795] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:38.596 [2024-07-26 11:04:57.833803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:38.596 [2024-07-26 11:04:57.833811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:38.596 [2024-07-26 11:04:57.833815] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:38.596 
[2024-07-26 11:04:57.833820] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:38.596 [2024-07-26 11:04:57.833827] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:38.596 [2024-07-26 11:04:57.833832] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:38.596 [2024-07-26 11:04:57.833840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:38.596 [2024-07-26 11:04:57.833851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:38.596 [2024-07-26 11:04:57.833901] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:38.596 [2024-07-26 11:04:57.833908] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:38.596 [2024-07-26 11:04:57.833914] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:38.596 [2024-07-26 11:04:57.833918] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:38.596 [2024-07-26 11:04:57.833921] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:38.596 [2024-07-26 11:04:57.833927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:38.596 [2024-07-26 11:04:57.833939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:38.596 [2024-07-26 11:04:57.833948] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:38.597 [2024-07-26 11:04:57.833958] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:38.597 [2024-07-26 11:04:57.833965] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:38.597 [2024-07-26 11:04:57.833971] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:38.597 [2024-07-26 11:04:57.833974] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:38.597 [2024-07-26 11:04:57.833977] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:38.597 [2024-07-26 11:04:57.833983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:38.597 [2024-07-26 11:04:57.834002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:38.597 [2024-07-26 11:04:57.834013] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:16:38.597 [2024-07-26 11:04:57.834020] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:38.597 [2024-07-26 11:04:57.834026] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:38.597 [2024-07-26 11:04:57.834030] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:38.597 [2024-07-26 11:04:57.834033] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:38.597 [2024-07-26 11:04:57.834038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:38.597 [2024-07-26 11:04:57.834056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:38.597 [2024-07-26 11:04:57.834064] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:38.597 [2024-07-26 11:04:57.834069] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:38.597 [2024-07-26 11:04:57.834076] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:38.597 [2024-07-26 11:04:57.834085] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:38.597 [2024-07-26 11:04:57.834090] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:38.597 [2024-07-26 11:04:57.834094] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:38.597 [2024-07-26 11:04:57.834098] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:38.597 [2024-07-26 11:04:57.834102] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:38.597 [2024-07-26 11:04:57.834107] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:38.597 [2024-07-26 11:04:57.834122] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:38.597 [2024-07-26 11:04:57.834134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:38.597 [2024-07-26 11:04:57.834144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:38.597 [2024-07-26 11:04:57.834158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:38.597 [2024-07-26 11:04:57.834167] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:38.597 [2024-07-26 
11:04:57.834175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:38.597 [2024-07-26 11:04:57.834185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:38.597 [2024-07-26 11:04:57.834197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:38.597 [2024-07-26 11:04:57.834208] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:38.597 [2024-07-26 11:04:57.834212] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:38.597 [2024-07-26 11:04:57.834215] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:38.597 [2024-07-26 11:04:57.834218] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:38.597 [2024-07-26 11:04:57.834221] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:38.597 [2024-07-26 11:04:57.834227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:38.597 [2024-07-26 11:04:57.834233] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:38.597 [2024-07-26 11:04:57.834237] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:38.597 [2024-07-26 11:04:57.834239] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:38.597 [2024-07-26 11:04:57.834245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:38.597 [2024-07-26 11:04:57.834250] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:38.597 [2024-07-26 11:04:57.834254] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:38.597 [2024-07-26 11:04:57.834257] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:38.597 [2024-07-26 11:04:57.834262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:38.597 [2024-07-26 11:04:57.834270] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:38.597 [2024-07-26 11:04:57.834274] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:38.597 [2024-07-26 11:04:57.834277] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:38.597 [2024-07-26 11:04:57.834282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:38.597 [2024-07-26 11:04:57.834288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:38.597 [2024-07-26 11:04:57.834300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:38.597 [2024-07-26 
11:04:57.834310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:38.597 [2024-07-26 11:04:57.834316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:38.597 ===================================================== 00:16:38.597 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:38.597 ===================================================== 00:16:38.597 Controller Capabilities/Features 00:16:38.597 ================================ 00:16:38.597 Vendor ID: 4e58 00:16:38.597 Subsystem Vendor ID: 4e58 00:16:38.597 Serial Number: SPDK1 00:16:38.597 Model Number: SPDK bdev Controller 00:16:38.597 Firmware Version: 24.09 00:16:38.597 Recommended Arb Burst: 6 00:16:38.597 IEEE OUI Identifier: 8d 6b 50 00:16:38.597 Multi-path I/O 00:16:38.597 May have multiple subsystem ports: Yes 00:16:38.597 May have multiple controllers: Yes 00:16:38.597 Associated with SR-IOV VF: No 00:16:38.597 Max Data Transfer Size: 131072 00:16:38.597 Max Number of Namespaces: 32 00:16:38.597 Max Number of I/O Queues: 127 00:16:38.597 NVMe Specification Version (VS): 1.3 00:16:38.597 NVMe Specification Version (Identify): 1.3 00:16:38.597 Maximum Queue Entries: 256 00:16:38.597 Contiguous Queues Required: Yes 00:16:38.597 Arbitration Mechanisms Supported 00:16:38.597 Weighted Round Robin: Not Supported 00:16:38.597 Vendor Specific: Not Supported 00:16:38.597 Reset Timeout: 15000 ms 00:16:38.597 Doorbell Stride: 4 bytes 00:16:38.597 NVM Subsystem Reset: Not Supported 00:16:38.597 Command Sets Supported 00:16:38.597 NVM Command Set: Supported 00:16:38.597 Boot Partition: Not Supported 00:16:38.597 Memory Page Size Minimum: 4096 bytes 00:16:38.597 Memory Page Size Maximum: 4096 bytes 00:16:38.597 Persistent Memory Region: Not Supported 00:16:38.597 Optional Asynchronous Events Supported 00:16:38.597 Namespace Attribute Notices: Supported 00:16:38.597 Firmware Activation Notices: Not Supported 00:16:38.597 ANA Change Notices: Not Supported 00:16:38.597 PLE Aggregate Log Change Notices: Not Supported 00:16:38.597 LBA Status Info Alert Notices: Not Supported 00:16:38.597 EGE Aggregate Log Change Notices: Not Supported 00:16:38.597 Normal NVM Subsystem Shutdown event: Not Supported 00:16:38.597 Zone Descriptor Change Notices: Not Supported 00:16:38.597 Discovery Log Change Notices: Not Supported 00:16:38.597 Controller Attributes 00:16:38.597 128-bit Host Identifier: Supported 00:16:38.597 Non-Operational Permissive Mode: Not Supported 00:16:38.597 NVM Sets: Not Supported 00:16:38.597 Read Recovery Levels: Not Supported 00:16:38.597 Endurance Groups: Not Supported 00:16:38.597 Predictable Latency Mode: Not Supported 00:16:38.597 Traffic Based Keep ALive: Not Supported 00:16:38.597 Namespace Granularity: Not Supported 00:16:38.597 SQ Associations: Not Supported 00:16:38.597 UUID List: Not Supported 00:16:38.597 Multi-Domain Subsystem: Not Supported 00:16:38.597 Fixed Capacity Management: Not Supported 00:16:38.597 Variable Capacity Management: Not Supported 00:16:38.597 Delete Endurance Group: Not Supported 00:16:38.597 Delete NVM Set: Not Supported 00:16:38.597 Extended LBA Formats Supported: Not Supported 00:16:38.597 Flexible Data Placement Supported: Not Supported 00:16:38.597 00:16:38.597 Controller Memory Buffer Support 00:16:38.598 ================================ 00:16:38.598 Supported: No 00:16:38.598 00:16:38.598 Persistent 
Memory Region Support 00:16:38.598 ================================ 00:16:38.598 Supported: No 00:16:38.598 00:16:38.598 Admin Command Set Attributes 00:16:38.598 ============================ 00:16:38.598 Security Send/Receive: Not Supported 00:16:38.598 Format NVM: Not Supported 00:16:38.598 Firmware Activate/Download: Not Supported 00:16:38.598 Namespace Management: Not Supported 00:16:38.598 Device Self-Test: Not Supported 00:16:38.598 Directives: Not Supported 00:16:38.598 NVMe-MI: Not Supported 00:16:38.598 Virtualization Management: Not Supported 00:16:38.598 Doorbell Buffer Config: Not Supported 00:16:38.598 Get LBA Status Capability: Not Supported 00:16:38.598 Command & Feature Lockdown Capability: Not Supported 00:16:38.598 Abort Command Limit: 4 00:16:38.598 Async Event Request Limit: 4 00:16:38.598 Number of Firmware Slots: N/A 00:16:38.598 Firmware Slot 1 Read-Only: N/A 00:16:38.598 Firmware Activation Without Reset: N/A 00:16:38.598 Multiple Update Detection Support: N/A 00:16:38.598 Firmware Update Granularity: No Information Provided 00:16:38.598 Per-Namespace SMART Log: No 00:16:38.598 Asymmetric Namespace Access Log Page: Not Supported 00:16:38.598 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:38.598 Command Effects Log Page: Supported 00:16:38.598 Get Log Page Extended Data: Supported 00:16:38.598 Telemetry Log Pages: Not Supported 00:16:38.598 Persistent Event Log Pages: Not Supported 00:16:38.598 Supported Log Pages Log Page: May Support 00:16:38.598 Commands Supported & Effects Log Page: Not Supported 00:16:38.598 Feature Identifiers & Effects Log Page:May Support 00:16:38.598 NVMe-MI Commands & Effects Log Page: May Support 00:16:38.598 Data Area 4 for Telemetry Log: Not Supported 00:16:38.598 Error Log Page Entries Supported: 128 00:16:38.598 Keep Alive: Supported 00:16:38.598 Keep Alive Granularity: 10000 ms 00:16:38.598 00:16:38.598 NVM Command Set Attributes 00:16:38.598 ========================== 00:16:38.598 Submission Queue Entry Size 00:16:38.598 Max: 64 00:16:38.598 Min: 64 00:16:38.598 Completion Queue Entry Size 00:16:38.598 Max: 16 00:16:38.598 Min: 16 00:16:38.598 Number of Namespaces: 32 00:16:38.598 Compare Command: Supported 00:16:38.598 Write Uncorrectable Command: Not Supported 00:16:38.598 Dataset Management Command: Supported 00:16:38.598 Write Zeroes Command: Supported 00:16:38.598 Set Features Save Field: Not Supported 00:16:38.598 Reservations: Not Supported 00:16:38.598 Timestamp: Not Supported 00:16:38.598 Copy: Supported 00:16:38.598 Volatile Write Cache: Present 00:16:38.598 Atomic Write Unit (Normal): 1 00:16:38.598 Atomic Write Unit (PFail): 1 00:16:38.598 Atomic Compare & Write Unit: 1 00:16:38.598 Fused Compare & Write: Supported 00:16:38.598 Scatter-Gather List 00:16:38.598 SGL Command Set: Supported (Dword aligned) 00:16:38.598 SGL Keyed: Not Supported 00:16:38.598 SGL Bit Bucket Descriptor: Not Supported 00:16:38.598 SGL Metadata Pointer: Not Supported 00:16:38.598 Oversized SGL: Not Supported 00:16:38.598 SGL Metadata Address: Not Supported 00:16:38.598 SGL Offset: Not Supported 00:16:38.598 Transport SGL Data Block: Not Supported 00:16:38.598 Replay Protected Memory Block: Not Supported 00:16:38.598 00:16:38.598 Firmware Slot Information 00:16:38.598 ========================= 00:16:38.598 Active slot: 1 00:16:38.598 Slot 1 Firmware Revision: 24.09 00:16:38.598 00:16:38.598 00:16:38.598 Commands Supported and Effects 00:16:38.598 ============================== 00:16:38.598 Admin Commands 00:16:38.598 -------------- 00:16:38.598 Get 
Log Page (02h): Supported 00:16:38.598 Identify (06h): Supported 00:16:38.598 Abort (08h): Supported 00:16:38.598 Set Features (09h): Supported 00:16:38.598 Get Features (0Ah): Supported 00:16:38.598 Asynchronous Event Request (0Ch): Supported 00:16:38.598 Keep Alive (18h): Supported 00:16:38.598 I/O Commands 00:16:38.598 ------------ 00:16:38.598 Flush (00h): Supported LBA-Change 00:16:38.598 Write (01h): Supported LBA-Change 00:16:38.598 Read (02h): Supported 00:16:38.598 Compare (05h): Supported 00:16:38.598 Write Zeroes (08h): Supported LBA-Change 00:16:38.598 Dataset Management (09h): Supported LBA-Change 00:16:38.598 Copy (19h): Supported LBA-Change 00:16:38.598 00:16:38.598 Error Log 00:16:38.598 ========= 00:16:38.598 00:16:38.598 Arbitration 00:16:38.598 =========== 00:16:38.598 Arbitration Burst: 1 00:16:38.598 00:16:38.598 Power Management 00:16:38.598 ================ 00:16:38.598 Number of Power States: 1 00:16:38.598 Current Power State: Power State #0 00:16:38.598 Power State #0: 00:16:38.598 Max Power: 0.00 W 00:16:38.598 Non-Operational State: Operational 00:16:38.598 Entry Latency: Not Reported 00:16:38.598 Exit Latency: Not Reported 00:16:38.598 Relative Read Throughput: 0 00:16:38.598 Relative Read Latency: 0 00:16:38.598 Relative Write Throughput: 0 00:16:38.598 Relative Write Latency: 0 00:16:38.598 Idle Power: Not Reported 00:16:38.598 Active Power: Not Reported 00:16:38.598 Non-Operational Permissive Mode: Not Supported 00:16:38.598 00:16:38.598 Health Information 00:16:38.598 ================== 00:16:38.598 Critical Warnings: 00:16:38.598 Available Spare Space: OK 00:16:38.598 Temperature: OK 00:16:38.598 Device Reliability: OK 00:16:38.598 Read Only: No 00:16:38.598 Volatile Memory Backup: OK 00:16:38.598 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:38.598 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:38.598 Available Spare: 0% 00:16:38.598 Available Sp[2024-07-26 11:04:57.834400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:38.598 [2024-07-26 11:04:57.834411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:38.598 [2024-07-26 11:04:57.834433] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:38.598 [2024-07-26 11:04:57.834440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.598 [2024-07-26 11:04:57.834446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.598 [2024-07-26 11:04:57.834451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.598 [2024-07-26 11:04:57.834456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:38.598 [2024-07-26 11:04:57.834582] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:38.598 [2024-07-26 11:04:57.834591] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:38.598 [2024-07-26 11:04:57.835586] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:38.598 [2024-07-26 11:04:57.835632] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:38.598 [2024-07-26 11:04:57.835638] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:38.598 [2024-07-26 11:04:57.836594] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:38.598 [2024-07-26 11:04:57.836604] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:38.598 [2024-07-26 11:04:57.836651] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:38.598 [2024-07-26 11:04:57.842048] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:38.598 are Threshold: 0% 00:16:38.598 Life Percentage Used: 0% 00:16:38.598 Data Units Read: 0 00:16:38.598 Data Units Written: 0 00:16:38.598 Host Read Commands: 0 00:16:38.598 Host Write Commands: 0 00:16:38.598 Controller Busy Time: 0 minutes 00:16:38.598 Power Cycles: 0 00:16:38.598 Power On Hours: 0 hours 00:16:38.598 Unsafe Shutdowns: 0 00:16:38.598 Unrecoverable Media Errors: 0 00:16:38.598 Lifetime Error Log Entries: 0 00:16:38.598 Warning Temperature Time: 0 minutes 00:16:38.598 Critical Temperature Time: 0 minutes 00:16:38.598 00:16:38.598 Number of Queues 00:16:38.598 ================ 00:16:38.598 Number of I/O Submission Queues: 127 00:16:38.598 Number of I/O Completion Queues: 127 00:16:38.598 00:16:38.598 Active Namespaces 00:16:38.598 ================= 00:16:38.598 Namespace ID:1 00:16:38.598 Error Recovery Timeout: Unlimited 00:16:38.598 Command Set Identifier: NVM (00h) 00:16:38.598 Deallocate: Supported 00:16:38.598 Deallocated/Unwritten Error: Not Supported 00:16:38.599 Deallocated Read Value: Unknown 00:16:38.599 Deallocate in Write Zeroes: Not Supported 00:16:38.599 Deallocated Guard Field: 0xFFFF 00:16:38.599 Flush: Supported 00:16:38.599 Reservation: Supported 00:16:38.599 Namespace Sharing Capabilities: Multiple Controllers 00:16:38.599 Size (in LBAs): 131072 (0GiB) 00:16:38.599 Capacity (in LBAs): 131072 (0GiB) 00:16:38.599 Utilization (in LBAs): 131072 (0GiB) 00:16:38.599 NGUID: 5EC1CDB78DE84AA8854E05BDA883A684 00:16:38.599 UUID: 5ec1cdb7-8de8-4aa8-854e-05bda883a684 00:16:38.599 Thin Provisioning: Not Supported 00:16:38.599 Per-NS Atomic Units: Yes 00:16:38.599 Atomic Boundary Size (Normal): 0 00:16:38.599 Atomic Boundary Size (PFail): 0 00:16:38.599 Atomic Boundary Offset: 0 00:16:38.599 Maximum Single Source Range Length: 65535 00:16:38.599 Maximum Copy Length: 65535 00:16:38.599 Maximum Source Range Count: 1 00:16:38.599 NGUID/EUI64 Never Reused: No 00:16:38.599 Namespace Write Protected: No 00:16:38.599 Number of LBA Formats: 1 00:16:38.599 Current LBA Format: LBA Format #00 00:16:38.599 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:38.599 00:16:38.599 11:04:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:38.599 EAL: No free 2048 kB hugepages reported 
on node 1 00:16:38.599 [2024-07-26 11:04:58.054823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:43.882 Initializing NVMe Controllers 00:16:43.882 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:43.882 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:43.882 Initialization complete. Launching workers. 00:16:43.882 ======================================================== 00:16:43.882 Latency(us) 00:16:43.882 Device Information : IOPS MiB/s Average min max 00:16:43.882 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39862.80 155.71 3211.18 975.69 10581.69 00:16:43.882 ======================================================== 00:16:43.882 Total : 39862.80 155.71 3211.18 975.69 10581.69 00:16:43.882 00:16:43.882 [2024-07-26 11:05:03.076283] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:43.882 11:05:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:43.882 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.882 [2024-07-26 11:05:03.298323] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:49.163 Initializing NVMe Controllers 00:16:49.163 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:49.163 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:49.163 Initialization complete. Launching workers. 
00:16:49.163 ======================================================== 00:16:49.163 Latency(us) 00:16:49.163 Device Information : IOPS MiB/s Average min max 00:16:49.163 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16042.48 62.67 7978.13 4985.74 9978.94 00:16:49.164 ======================================================== 00:16:49.164 Total : 16042.48 62.67 7978.13 4985.74 9978.94 00:16:49.164 00:16:49.164 [2024-07-26 11:05:08.332109] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:49.164 11:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:49.164 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.164 [2024-07-26 11:05:08.531127] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:54.443 [2024-07-26 11:05:13.605382] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:54.443 Initializing NVMe Controllers 00:16:54.443 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:54.443 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:54.443 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:54.443 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:54.443 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:54.443 Initialization complete. Launching workers. 00:16:54.443 Starting thread on core 2 00:16:54.443 Starting thread on core 3 00:16:54.443 Starting thread on core 1 00:16:54.443 11:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:54.443 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.443 [2024-07-26 11:05:13.886419] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:57.737 [2024-07-26 11:05:16.954891] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:57.737 Initializing NVMe Controllers 00:16:57.737 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:57.737 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:57.737 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:57.737 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:57.737 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:57.737 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:57.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:57.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:57.738 Initialization complete. Launching workers. 
00:16:57.738 Starting thread on core 1 with urgent priority queue 00:16:57.738 Starting thread on core 2 with urgent priority queue 00:16:57.738 Starting thread on core 3 with urgent priority queue 00:16:57.738 Starting thread on core 0 with urgent priority queue 00:16:57.738 SPDK bdev Controller (SPDK1 ) core 0: 8323.33 IO/s 12.01 secs/100000 ios 00:16:57.738 SPDK bdev Controller (SPDK1 ) core 1: 9185.33 IO/s 10.89 secs/100000 ios 00:16:57.738 SPDK bdev Controller (SPDK1 ) core 2: 8420.67 IO/s 11.88 secs/100000 ios 00:16:57.738 SPDK bdev Controller (SPDK1 ) core 3: 7935.00 IO/s 12.60 secs/100000 ios 00:16:57.738 ======================================================== 00:16:57.738 00:16:57.738 11:05:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:57.738 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.738 [2024-07-26 11:05:17.227553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:57.998 Initializing NVMe Controllers 00:16:57.998 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:57.998 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:57.998 Namespace ID: 1 size: 0GB 00:16:57.998 Initialization complete. 00:16:57.998 INFO: using host memory buffer for IO 00:16:57.998 Hello world! 00:16:57.998 [2024-07-26 11:05:17.260771] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:57.998 11:05:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:57.998 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.258 [2024-07-26 11:05:17.521506] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:59.199 Initializing NVMe Controllers 00:16:59.199 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:59.199 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:59.199 Initialization complete. Launching workers. 
00:16:59.199 submit (in ns) avg, min, max = 8126.3, 3238.3, 4173221.7 00:16:59.199 complete (in ns) avg, min, max = 22127.6, 1801.7, 4000897.4 00:16:59.199 00:16:59.199 Submit histogram 00:16:59.199 ================ 00:16:59.199 Range in us Cumulative Count 00:16:59.199 3.228 - 3.242: 0.0061% ( 1) 00:16:59.199 3.242 - 3.256: 0.0367% ( 5) 00:16:59.199 3.256 - 3.270: 0.0489% ( 2) 00:16:59.199 3.270 - 3.283: 0.0795% ( 5) 00:16:59.199 3.283 - 3.297: 0.3549% ( 45) 00:16:59.199 3.297 - 3.311: 1.7682% ( 231) 00:16:59.199 3.311 - 3.325: 5.1701% ( 556) 00:16:59.199 3.325 - 3.339: 10.0649% ( 800) 00:16:59.199 3.339 - 3.353: 15.4797% ( 885) 00:16:59.199 3.353 - 3.367: 21.6899% ( 1015) 00:16:59.199 3.367 - 3.381: 27.2699% ( 912) 00:16:59.199 3.381 - 3.395: 32.3605% ( 832) 00:16:59.199 3.395 - 3.409: 37.8916% ( 904) 00:16:59.199 3.409 - 3.423: 42.5844% ( 767) 00:16:59.199 3.423 - 3.437: 47.0509% ( 730) 00:16:59.199 3.437 - 3.450: 51.5113% ( 729) 00:16:59.199 3.450 - 3.464: 57.4951% ( 978) 00:16:59.199 3.464 - 3.478: 63.6258% ( 1002) 00:16:59.199 3.478 - 3.492: 68.2513% ( 756) 00:16:59.199 3.492 - 3.506: 73.9109% ( 925) 00:16:59.199 3.506 - 3.520: 78.8607% ( 809) 00:16:59.199 3.520 - 3.534: 81.8955% ( 496) 00:16:59.199 3.534 - 3.548: 84.2695% ( 388) 00:16:59.199 3.548 - 3.562: 85.7073% ( 235) 00:16:59.199 3.562 - 3.590: 87.0962% ( 227) 00:16:59.199 3.590 - 3.617: 88.0996% ( 164) 00:16:59.199 3.617 - 3.645: 89.6109% ( 247) 00:16:59.199 3.645 - 3.673: 91.4831% ( 306) 00:16:59.199 3.673 - 3.701: 93.2513% ( 289) 00:16:59.199 3.701 - 3.729: 95.2398% ( 325) 00:16:59.199 3.729 - 3.757: 96.7878% ( 253) 00:16:59.199 3.757 - 3.784: 97.9564% ( 191) 00:16:59.199 3.784 - 3.812: 98.7396% ( 128) 00:16:59.199 3.812 - 3.840: 99.2230% ( 79) 00:16:59.199 3.840 - 3.868: 99.4493% ( 37) 00:16:59.199 3.868 - 3.896: 99.5656% ( 19) 00:16:59.199 3.896 - 3.923: 99.6023% ( 6) 00:16:59.199 3.923 - 3.951: 99.6084% ( 1) 00:16:59.199 4.090 - 4.118: 99.6145% ( 1) 00:16:59.199 4.230 - 4.257: 99.6207% ( 1) 00:16:59.199 5.287 - 5.315: 99.6268% ( 1) 00:16:59.199 5.343 - 5.370: 99.6329% ( 1) 00:16:59.199 5.510 - 5.537: 99.6390% ( 1) 00:16:59.199 5.843 - 5.871: 99.6451% ( 1) 00:16:59.199 5.871 - 5.899: 99.6512% ( 1) 00:16:59.199 5.899 - 5.927: 99.6574% ( 1) 00:16:59.199 5.983 - 6.010: 99.6635% ( 1) 00:16:59.199 6.038 - 6.066: 99.6696% ( 1) 00:16:59.199 6.122 - 6.150: 99.6757% ( 1) 00:16:59.199 6.205 - 6.233: 99.6818% ( 1) 00:16:59.199 6.428 - 6.456: 99.6941% ( 2) 00:16:59.199 6.456 - 6.483: 99.7002% ( 1) 00:16:59.199 6.539 - 6.567: 99.7063% ( 1) 00:16:59.199 6.678 - 6.706: 99.7186% ( 2) 00:16:59.199 6.706 - 6.734: 99.7247% ( 1) 00:16:59.199 6.762 - 6.790: 99.7308% ( 1) 00:16:59.199 6.790 - 6.817: 99.7430% ( 2) 00:16:59.199 6.873 - 6.901: 99.7491% ( 1) 00:16:59.199 6.929 - 6.957: 99.7553% ( 1) 00:16:59.199 7.068 - 7.096: 99.7614% ( 1) 00:16:59.199 7.096 - 7.123: 99.7736% ( 2) 00:16:59.199 7.123 - 7.179: 99.7797% ( 1) 00:16:59.199 7.513 - 7.569: 99.7859% ( 1) 00:16:59.199 7.569 - 7.624: 99.7920% ( 1) 00:16:59.199 7.624 - 7.680: 99.7981% ( 1) 00:16:59.199 7.736 - 7.791: 99.8042% ( 1) 00:16:59.199 7.847 - 7.903: 99.8164% ( 2) 00:16:59.200 8.014 - 8.070: 99.8226% ( 1) 00:16:59.200 8.125 - 8.181: 99.8348% ( 2) 00:16:59.200 8.181 - 8.237: 99.8409% ( 1) 00:16:59.200 8.403 - 8.459: 99.8532% ( 2) 00:16:59.200 8.626 - 8.682: 99.8593% ( 1) 00:16:59.200 8.793 - 8.849: 99.8654% ( 1) 00:16:59.200 8.904 - 8.960: 99.8715% ( 1) 00:16:59.200 9.127 - 9.183: 99.8776% ( 1) 00:16:59.200 10.240 - 10.296: 99.8837% ( 1) 00:16:59.200 3989.148 - 4017.642: 99.9939% ( 
18) 00:16:59.200 4160.111 - 4188.605: 100.0000% ( 1) 00:16:59.200 00:16:59.200 Complete histogram 00:16:59.200 ================== 00:16:59.200 Ra[2024-07-26 11:05:18.543618] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:59.200 nge in us Cumulative Count 00:16:59.200 1.795 - 1.809: 0.0061% ( 1) 00:16:59.200 1.809 - 1.823: 0.0857% ( 13) 00:16:59.200 1.823 - 1.837: 0.1101% ( 4) 00:16:59.200 1.837 - 1.850: 0.1897% ( 13) 00:16:59.200 1.850 - 1.864: 7.7215% ( 1231) 00:16:59.200 1.864 - 1.878: 52.8634% ( 7378) 00:16:59.200 1.878 - 1.892: 84.8813% ( 5233) 00:16:59.200 1.892 - 1.906: 92.1072% ( 1181) 00:16:59.200 1.906 - 1.920: 95.0869% ( 487) 00:16:59.200 1.920 - 1.934: 96.1209% ( 169) 00:16:59.200 1.934 - 1.948: 97.3752% ( 205) 00:16:59.200 1.948 - 1.962: 98.5744% ( 196) 00:16:59.200 1.962 - 1.976: 99.0088% ( 71) 00:16:59.200 1.976 - 1.990: 99.1067% ( 16) 00:16:59.200 1.990 - 2.003: 99.1495% ( 7) 00:16:59.200 2.003 - 2.017: 99.1801% ( 5) 00:16:59.200 2.017 - 2.031: 99.1862% ( 1) 00:16:59.200 2.031 - 2.045: 99.1985% ( 2) 00:16:59.200 2.045 - 2.059: 99.2046% ( 1) 00:16:59.200 2.059 - 2.073: 99.2168% ( 2) 00:16:59.200 2.073 - 2.087: 99.2291% ( 2) 00:16:59.200 2.087 - 2.101: 99.2413% ( 2) 00:16:59.200 2.115 - 2.129: 99.2535% ( 2) 00:16:59.200 2.129 - 2.143: 99.2658% ( 2) 00:16:59.200 2.337 - 2.351: 99.2719% ( 1) 00:16:59.200 2.351 - 2.365: 99.2780% ( 1) 00:16:59.200 2.379 - 2.393: 99.2841% ( 1) 00:16:59.200 3.367 - 3.381: 99.2903% ( 1) 00:16:59.200 3.868 - 3.896: 99.2964% ( 1) 00:16:59.200 3.951 - 3.979: 99.3025% ( 1) 00:16:59.200 4.202 - 4.230: 99.3086% ( 1) 00:16:59.200 4.369 - 4.397: 99.3147% ( 1) 00:16:59.200 4.424 - 4.452: 99.3209% ( 1) 00:16:59.200 4.480 - 4.508: 99.3270% ( 1) 00:16:59.200 4.647 - 4.675: 99.3331% ( 1) 00:16:59.200 4.675 - 4.703: 99.3392% ( 1) 00:16:59.200 4.703 - 4.730: 99.3453% ( 1) 00:16:59.200 5.092 - 5.120: 99.3514% ( 1) 00:16:59.200 5.176 - 5.203: 99.3576% ( 1) 00:16:59.200 5.287 - 5.315: 99.3637% ( 1) 00:16:59.200 5.343 - 5.370: 99.3698% ( 1) 00:16:59.200 5.510 - 5.537: 99.3820% ( 2) 00:16:59.200 5.649 - 5.677: 99.3882% ( 1) 00:16:59.200 5.871 - 5.899: 99.3943% ( 1) 00:16:59.200 6.122 - 6.150: 99.4004% ( 1) 00:16:59.200 6.177 - 6.205: 99.4065% ( 1) 00:16:59.200 6.289 - 6.317: 99.4187% ( 2) 00:16:59.200 6.317 - 6.344: 99.4310% ( 2) 00:16:59.200 6.344 - 6.372: 99.4371% ( 1) 00:16:59.200 6.428 - 6.456: 99.4432% ( 1) 00:16:59.200 6.595 - 6.623: 99.4493% ( 1) 00:16:59.200 6.650 - 6.678: 99.4555% ( 1) 00:16:59.200 6.790 - 6.817: 99.4616% ( 1) 00:16:59.200 7.068 - 7.096: 99.4677% ( 1) 00:16:59.200 8.348 - 8.403: 99.4738% ( 1) 00:16:59.200 8.403 - 8.459: 99.4799% ( 1) 00:16:59.200 8.904 - 8.960: 99.4860% ( 1) 00:16:59.200 9.350 - 9.405: 99.4922% ( 1) 00:16:59.200 3148.577 - 3162.824: 99.4983% ( 1) 00:16:59.200 3989.148 - 4017.642: 100.0000% ( 82) 00:16:59.200 00:16:59.200 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:59.200 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:59.200 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:59.200 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:59.200 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:59.461 [ 00:16:59.461 { 00:16:59.461 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:59.461 "subtype": "Discovery", 00:16:59.461 "listen_addresses": [], 00:16:59.461 "allow_any_host": true, 00:16:59.461 "hosts": [] 00:16:59.461 }, 00:16:59.461 { 00:16:59.461 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:59.461 "subtype": "NVMe", 00:16:59.461 "listen_addresses": [ 00:16:59.461 { 00:16:59.461 "trtype": "VFIOUSER", 00:16:59.461 "adrfam": "IPv4", 00:16:59.461 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:59.461 "trsvcid": "0" 00:16:59.461 } 00:16:59.461 ], 00:16:59.461 "allow_any_host": true, 00:16:59.461 "hosts": [], 00:16:59.461 "serial_number": "SPDK1", 00:16:59.461 "model_number": "SPDK bdev Controller", 00:16:59.461 "max_namespaces": 32, 00:16:59.461 "min_cntlid": 1, 00:16:59.461 "max_cntlid": 65519, 00:16:59.461 "namespaces": [ 00:16:59.461 { 00:16:59.461 "nsid": 1, 00:16:59.461 "bdev_name": "Malloc1", 00:16:59.461 "name": "Malloc1", 00:16:59.461 "nguid": "5EC1CDB78DE84AA8854E05BDA883A684", 00:16:59.461 "uuid": "5ec1cdb7-8de8-4aa8-854e-05bda883a684" 00:16:59.461 } 00:16:59.461 ] 00:16:59.461 }, 00:16:59.461 { 00:16:59.461 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:59.461 "subtype": "NVMe", 00:16:59.461 "listen_addresses": [ 00:16:59.461 { 00:16:59.461 "trtype": "VFIOUSER", 00:16:59.461 "adrfam": "IPv4", 00:16:59.461 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:59.461 "trsvcid": "0" 00:16:59.461 } 00:16:59.461 ], 00:16:59.461 "allow_any_host": true, 00:16:59.461 "hosts": [], 00:16:59.461 "serial_number": "SPDK2", 00:16:59.461 "model_number": "SPDK bdev Controller", 00:16:59.461 "max_namespaces": 32, 00:16:59.461 "min_cntlid": 1, 00:16:59.461 "max_cntlid": 65519, 00:16:59.461 "namespaces": [ 00:16:59.461 { 00:16:59.461 "nsid": 1, 00:16:59.461 "bdev_name": "Malloc2", 00:16:59.461 "name": "Malloc2", 00:16:59.461 "nguid": "4D1E4A31A93E48A1939AC768353EF8BD", 00:16:59.461 "uuid": "4d1e4a31-a93e-48a1-939a-c768353ef8bd" 00:16:59.461 } 00:16:59.461 ] 00:16:59.461 } 00:16:59.461 ] 00:16:59.461 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:59.461 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1435430 00:16:59.461 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:59.461 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:59.461 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:59.461 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:59.461 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:59.461 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:59.461 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:59.461 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:59.461 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.461 [2024-07-26 11:05:18.908785] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:59.461 Malloc3 00:16:59.721 11:05:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:59.721 [2024-07-26 11:05:19.126428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:59.721 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:59.721 Asynchronous Event Request test 00:16:59.721 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:59.721 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:59.721 Registering asynchronous event callbacks... 00:16:59.721 Starting namespace attribute notice tests for all controllers... 00:16:59.721 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:59.721 aer_cb - Changed Namespace 00:16:59.721 Cleaning up... 00:16:59.982 [ 00:16:59.982 { 00:16:59.982 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:59.982 "subtype": "Discovery", 00:16:59.983 "listen_addresses": [], 00:16:59.983 "allow_any_host": true, 00:16:59.983 "hosts": [] 00:16:59.983 }, 00:16:59.983 { 00:16:59.983 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:59.983 "subtype": "NVMe", 00:16:59.983 "listen_addresses": [ 00:16:59.983 { 00:16:59.983 "trtype": "VFIOUSER", 00:16:59.983 "adrfam": "IPv4", 00:16:59.983 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:59.983 "trsvcid": "0" 00:16:59.983 } 00:16:59.983 ], 00:16:59.983 "allow_any_host": true, 00:16:59.983 "hosts": [], 00:16:59.983 "serial_number": "SPDK1", 00:16:59.983 "model_number": "SPDK bdev Controller", 00:16:59.983 "max_namespaces": 32, 00:16:59.983 "min_cntlid": 1, 00:16:59.983 "max_cntlid": 65519, 00:16:59.983 "namespaces": [ 00:16:59.983 { 00:16:59.983 "nsid": 1, 00:16:59.983 "bdev_name": "Malloc1", 00:16:59.983 "name": "Malloc1", 00:16:59.983 "nguid": "5EC1CDB78DE84AA8854E05BDA883A684", 00:16:59.983 "uuid": "5ec1cdb7-8de8-4aa8-854e-05bda883a684" 00:16:59.983 }, 00:16:59.983 { 00:16:59.983 "nsid": 2, 00:16:59.983 "bdev_name": "Malloc3", 00:16:59.983 "name": "Malloc3", 00:16:59.983 "nguid": "65613E3DD6E1469D97E5B191FFEB26DA", 00:16:59.983 "uuid": "65613e3d-d6e1-469d-97e5-b191ffeb26da" 00:16:59.983 } 00:16:59.983 ] 00:16:59.983 }, 00:16:59.983 { 00:16:59.983 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:59.983 "subtype": "NVMe", 00:16:59.983 "listen_addresses": [ 00:16:59.983 { 00:16:59.983 "trtype": "VFIOUSER", 00:16:59.983 "adrfam": "IPv4", 00:16:59.983 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:59.983 "trsvcid": "0" 00:16:59.983 } 00:16:59.983 ], 00:16:59.983 "allow_any_host": true, 00:16:59.983 "hosts": [], 00:16:59.983 
"serial_number": "SPDK2", 00:16:59.983 "model_number": "SPDK bdev Controller", 00:16:59.983 "max_namespaces": 32, 00:16:59.983 "min_cntlid": 1, 00:16:59.983 "max_cntlid": 65519, 00:16:59.983 "namespaces": [ 00:16:59.983 { 00:16:59.983 "nsid": 1, 00:16:59.983 "bdev_name": "Malloc2", 00:16:59.983 "name": "Malloc2", 00:16:59.983 "nguid": "4D1E4A31A93E48A1939AC768353EF8BD", 00:16:59.983 "uuid": "4d1e4a31-a93e-48a1-939a-c768353ef8bd" 00:16:59.983 } 00:16:59.983 ] 00:16:59.983 } 00:16:59.983 ] 00:16:59.983 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1435430 00:16:59.983 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:59.983 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:59.983 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:59.983 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:59.983 [2024-07-26 11:05:19.355824] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:59.983 [2024-07-26 11:05:19.355870] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435437 ] 00:16:59.983 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.983 [2024-07-26 11:05:19.383443] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:59.983 [2024-07-26 11:05:19.391253] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:59.983 [2024-07-26 11:05:19.391272] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0880241000 00:16:59.983 [2024-07-26 11:05:19.392262] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:59.983 [2024-07-26 11:05:19.393265] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:59.983 [2024-07-26 11:05:19.394274] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:59.983 [2024-07-26 11:05:19.395278] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:59.983 [2024-07-26 11:05:19.396283] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:59.983 [2024-07-26 11:05:19.397287] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:59.983 [2024-07-26 11:05:19.398293] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:59.983 [2024-07-26 11:05:19.399300] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:59.983 [2024-07-26 11:05:19.400308] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:59.983 [2024-07-26 11:05:19.400317] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0880236000 00:16:59.983 [2024-07-26 11:05:19.401335] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:59.983 [2024-07-26 11:05:19.413051] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:59.983 [2024-07-26 11:05:19.413072] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:59.983 [2024-07-26 11:05:19.418159] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:59.983 [2024-07-26 11:05:19.418199] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:59.983 [2024-07-26 11:05:19.418269] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:59.983 [2024-07-26 11:05:19.418283] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:59.983 [2024-07-26 11:05:19.418288] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:59.983 [2024-07-26 11:05:19.419169] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:59.983 [2024-07-26 11:05:19.419181] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:59.983 [2024-07-26 11:05:19.419189] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:59.983 [2024-07-26 11:05:19.420173] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:59.983 [2024-07-26 11:05:19.420182] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:59.983 [2024-07-26 11:05:19.420189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:59.983 [2024-07-26 11:05:19.421181] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:59.983 [2024-07-26 11:05:19.421189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:59.983 [2024-07-26 11:05:19.422187] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:59.983 [2024-07-26 11:05:19.422197] 
nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:59.983 [2024-07-26 11:05:19.422202] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:59.983 [2024-07-26 11:05:19.422208] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:59.983 [2024-07-26 11:05:19.422313] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:59.983 [2024-07-26 11:05:19.422317] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:59.983 [2024-07-26 11:05:19.422321] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:59.983 [2024-07-26 11:05:19.423193] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:59.983 [2024-07-26 11:05:19.424255] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:59.983 [2024-07-26 11:05:19.425266] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:59.983 [2024-07-26 11:05:19.426267] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:59.983 [2024-07-26 11:05:19.426304] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:59.983 [2024-07-26 11:05:19.427276] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:59.983 [2024-07-26 11:05:19.427284] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:59.983 [2024-07-26 11:05:19.427289] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:59.983 [2024-07-26 11:05:19.427306] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:59.983 [2024-07-26 11:05:19.427313] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:59.983 [2024-07-26 11:05:19.427324] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:59.983 [2024-07-26 11:05:19.427328] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:59.984 [2024-07-26 11:05:19.427331] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:59.984 [2024-07-26 11:05:19.427342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:59.984 [2024-07-26 11:05:19.436050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:59.984 [2024-07-26 11:05:19.436061] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:59.984 [2024-07-26 11:05:19.436065] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:59.984 [2024-07-26 11:05:19.436069] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:59.984 [2024-07-26 11:05:19.436073] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:59.984 [2024-07-26 11:05:19.436080] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:59.984 [2024-07-26 11:05:19.436084] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:59.984 [2024-07-26 11:05:19.436088] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:59.984 [2024-07-26 11:05:19.436095] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:59.984 [2024-07-26 11:05:19.436106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:59.984 [2024-07-26 11:05:19.444048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:59.984 [2024-07-26 11:05:19.444061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.984 [2024-07-26 11:05:19.444070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.984 [2024-07-26 11:05:19.444077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.984 [2024-07-26 11:05:19.444084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:59.984 [2024-07-26 11:05:19.444089] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:59.984 [2024-07-26 11:05:19.444096] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:59.984 [2024-07-26 11:05:19.444104] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:59.984 [2024-07-26 11:05:19.452049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:59.984 [2024-07-26 11:05:19.452058] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:59.984 [2024-07-26 11:05:19.452064] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:59.984 [2024-07-26 11:05:19.452073] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:59.984 [2024-07-26 11:05:19.452080] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:59.984 [2024-07-26 11:05:19.452089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:59.984 [2024-07-26 11:05:19.460049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:59.984 [2024-07-26 11:05:19.460107] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:59.984 [2024-07-26 11:05:19.460116] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:59.984 [2024-07-26 11:05:19.460123] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:59.984 [2024-07-26 11:05:19.460130] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:59.984 [2024-07-26 11:05:19.460135] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:59.984 [2024-07-26 11:05:19.460141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:59.984 [2024-07-26 11:05:19.468048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:59.984 [2024-07-26 11:05:19.468061] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:59.984 [2024-07-26 11:05:19.468071] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:59.984 [2024-07-26 11:05:19.468077] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:59.984 [2024-07-26 11:05:19.468084] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:59.984 [2024-07-26 11:05:19.468088] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:59.984 [2024-07-26 11:05:19.468092] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:59.984 [2024-07-26 11:05:19.468097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:59.984 [2024-07-26 11:05:19.476051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:59.984 [2024-07-26 11:05:19.476067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:59.984 [2024-07-26 11:05:19.476074] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:59.984 [2024-07-26 11:05:19.476081] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:59.984 [2024-07-26 11:05:19.476085] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:59.984 [2024-07-26 11:05:19.476087] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:59.984 [2024-07-26 11:05:19.476093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:00.248 [2024-07-26 11:05:19.484050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:00.248 [2024-07-26 11:05:19.484061] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:00.248 [2024-07-26 11:05:19.484067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:00.248 [2024-07-26 11:05:19.484074] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:00.248 [2024-07-26 11:05:19.484080] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:00.248 [2024-07-26 11:05:19.484085] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:00.248 [2024-07-26 11:05:19.484089] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:00.248 [2024-07-26 11:05:19.484094] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:00.248 [2024-07-26 11:05:19.484098] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:00.248 [2024-07-26 11:05:19.484102] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:00.248 [2024-07-26 11:05:19.484120] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:00.248 [2024-07-26 11:05:19.492047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:00.248 [2024-07-26 11:05:19.492060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:00.248 [2024-07-26 11:05:19.500048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:00.248 [2024-07-26 11:05:19.500060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:00.248 [2024-07-26 11:05:19.508048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:00.248 [2024-07-26 11:05:19.508060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:00.248 [2024-07-26 11:05:19.516052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:00.248 [2024-07-26 11:05:19.516070] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:00.248 [2024-07-26 11:05:19.516075] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:00.248 [2024-07-26 11:05:19.516078] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:00.248 [2024-07-26 11:05:19.516081] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:00.248 [2024-07-26 11:05:19.516084] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:00.248 [2024-07-26 11:05:19.516090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:00.248 [2024-07-26 11:05:19.516097] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:00.248 [2024-07-26 11:05:19.516101] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:00.248 [2024-07-26 11:05:19.516104] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:00.248 [2024-07-26 11:05:19.516109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:00.248 [2024-07-26 11:05:19.516115] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:00.248 [2024-07-26 11:05:19.516119] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:00.248 [2024-07-26 11:05:19.516122] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:00.248 [2024-07-26 11:05:19.516128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:00.248 [2024-07-26 11:05:19.516135] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:00.248 [2024-07-26 11:05:19.516138] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:00.248 [2024-07-26 11:05:19.516141] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:00.248 [2024-07-26 11:05:19.516147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:00.248 [2024-07-26 11:05:19.524051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:00.248 [2024-07-26 11:05:19.524065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:00.248 [2024-07-26 11:05:19.524077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:00.248 [2024-07-26 11:05:19.524083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:00.248 ===================================================== 00:17:00.248 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:00.248 ===================================================== 00:17:00.248 Controller Capabilities/Features 00:17:00.248 ================================ 00:17:00.248 Vendor ID: 4e58 00:17:00.248 Subsystem Vendor ID: 4e58 00:17:00.248 Serial Number: SPDK2 00:17:00.248 Model Number: SPDK bdev Controller 00:17:00.248 Firmware Version: 24.09 00:17:00.248 Recommended Arb Burst: 6 00:17:00.248 IEEE OUI Identifier: 8d 6b 50 00:17:00.248 Multi-path I/O 00:17:00.248 May have multiple subsystem ports: Yes 00:17:00.248 May have multiple controllers: Yes 00:17:00.248 Associated with SR-IOV VF: No 00:17:00.248 Max Data Transfer Size: 131072 00:17:00.248 Max Number of Namespaces: 32 00:17:00.248 Max Number of I/O Queues: 127 00:17:00.249 NVMe Specification Version (VS): 1.3 00:17:00.249 NVMe Specification Version (Identify): 1.3 00:17:00.249 Maximum Queue Entries: 256 00:17:00.249 Contiguous Queues Required: Yes 00:17:00.249 Arbitration Mechanisms Supported 00:17:00.249 Weighted Round Robin: Not Supported 00:17:00.249 Vendor Specific: Not Supported 00:17:00.249 Reset Timeout: 15000 ms 00:17:00.249 Doorbell Stride: 4 bytes 00:17:00.249 NVM Subsystem Reset: Not Supported 00:17:00.249 Command Sets Supported 00:17:00.249 NVM Command Set: Supported 00:17:00.249 Boot Partition: Not Supported 00:17:00.249 Memory Page Size Minimum: 4096 bytes 00:17:00.249 Memory Page Size Maximum: 4096 bytes 00:17:00.249 Persistent Memory Region: Not Supported 00:17:00.249 Optional Asynchronous Events Supported 00:17:00.249 Namespace Attribute Notices: Supported 00:17:00.249 Firmware Activation Notices: Not Supported 00:17:00.249 ANA Change Notices: Not Supported 00:17:00.249 PLE Aggregate Log Change Notices: Not Supported 00:17:00.249 LBA Status Info Alert Notices: Not Supported 00:17:00.249 EGE Aggregate Log Change Notices: Not Supported 00:17:00.249 Normal NVM Subsystem Shutdown event: Not Supported 00:17:00.249 Zone Descriptor Change Notices: Not Supported 00:17:00.249 Discovery Log Change Notices: Not Supported 00:17:00.249 Controller Attributes 00:17:00.249 128-bit Host Identifier: Supported 00:17:00.249 Non-Operational Permissive Mode: Not Supported 00:17:00.249 NVM Sets: Not Supported 00:17:00.249 Read Recovery Levels: Not Supported 00:17:00.249 Endurance Groups: Not Supported 00:17:00.249 Predictable Latency Mode: Not Supported 00:17:00.249 Traffic Based Keep ALive: Not Supported 00:17:00.249 Namespace Granularity: Not Supported 00:17:00.249 SQ Associations: Not Supported 00:17:00.249 UUID List: Not Supported 00:17:00.249 Multi-Domain Subsystem: Not Supported 00:17:00.249 Fixed Capacity Management: Not Supported 00:17:00.249 Variable Capacity Management: Not Supported 00:17:00.249 Delete Endurance Group: Not Supported 00:17:00.249 Delete NVM Set: Not Supported 00:17:00.249 Extended LBA Formats Supported: Not Supported 00:17:00.249 Flexible Data Placement Supported: Not Supported 00:17:00.249 00:17:00.249 Controller Memory Buffer Support 00:17:00.249 ================================ 00:17:00.249 Supported: No 00:17:00.249 00:17:00.249 Persistent Memory Region Support 00:17:00.249 
================================ 00:17:00.249 Supported: No 00:17:00.249 00:17:00.249 Admin Command Set Attributes 00:17:00.249 ============================ 00:17:00.249 Security Send/Receive: Not Supported 00:17:00.249 Format NVM: Not Supported 00:17:00.249 Firmware Activate/Download: Not Supported 00:17:00.249 Namespace Management: Not Supported 00:17:00.249 Device Self-Test: Not Supported 00:17:00.249 Directives: Not Supported 00:17:00.249 NVMe-MI: Not Supported 00:17:00.249 Virtualization Management: Not Supported 00:17:00.249 Doorbell Buffer Config: Not Supported 00:17:00.249 Get LBA Status Capability: Not Supported 00:17:00.249 Command & Feature Lockdown Capability: Not Supported 00:17:00.249 Abort Command Limit: 4 00:17:00.249 Async Event Request Limit: 4 00:17:00.249 Number of Firmware Slots: N/A 00:17:00.249 Firmware Slot 1 Read-Only: N/A 00:17:00.249 Firmware Activation Without Reset: N/A 00:17:00.249 Multiple Update Detection Support: N/A 00:17:00.249 Firmware Update Granularity: No Information Provided 00:17:00.249 Per-Namespace SMART Log: No 00:17:00.249 Asymmetric Namespace Access Log Page: Not Supported 00:17:00.249 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:00.249 Command Effects Log Page: Supported 00:17:00.249 Get Log Page Extended Data: Supported 00:17:00.249 Telemetry Log Pages: Not Supported 00:17:00.249 Persistent Event Log Pages: Not Supported 00:17:00.249 Supported Log Pages Log Page: May Support 00:17:00.249 Commands Supported & Effects Log Page: Not Supported 00:17:00.249 Feature Identifiers & Effects Log Page:May Support 00:17:00.249 NVMe-MI Commands & Effects Log Page: May Support 00:17:00.249 Data Area 4 for Telemetry Log: Not Supported 00:17:00.249 Error Log Page Entries Supported: 128 00:17:00.249 Keep Alive: Supported 00:17:00.249 Keep Alive Granularity: 10000 ms 00:17:00.249 00:17:00.249 NVM Command Set Attributes 00:17:00.249 ========================== 00:17:00.249 Submission Queue Entry Size 00:17:00.249 Max: 64 00:17:00.249 Min: 64 00:17:00.249 Completion Queue Entry Size 00:17:00.249 Max: 16 00:17:00.249 Min: 16 00:17:00.249 Number of Namespaces: 32 00:17:00.249 Compare Command: Supported 00:17:00.249 Write Uncorrectable Command: Not Supported 00:17:00.249 Dataset Management Command: Supported 00:17:00.249 Write Zeroes Command: Supported 00:17:00.249 Set Features Save Field: Not Supported 00:17:00.249 Reservations: Not Supported 00:17:00.249 Timestamp: Not Supported 00:17:00.249 Copy: Supported 00:17:00.249 Volatile Write Cache: Present 00:17:00.249 Atomic Write Unit (Normal): 1 00:17:00.249 Atomic Write Unit (PFail): 1 00:17:00.249 Atomic Compare & Write Unit: 1 00:17:00.249 Fused Compare & Write: Supported 00:17:00.249 Scatter-Gather List 00:17:00.249 SGL Command Set: Supported (Dword aligned) 00:17:00.249 SGL Keyed: Not Supported 00:17:00.249 SGL Bit Bucket Descriptor: Not Supported 00:17:00.249 SGL Metadata Pointer: Not Supported 00:17:00.249 Oversized SGL: Not Supported 00:17:00.249 SGL Metadata Address: Not Supported 00:17:00.249 SGL Offset: Not Supported 00:17:00.249 Transport SGL Data Block: Not Supported 00:17:00.249 Replay Protected Memory Block: Not Supported 00:17:00.249 00:17:00.249 Firmware Slot Information 00:17:00.249 ========================= 00:17:00.249 Active slot: 1 00:17:00.249 Slot 1 Firmware Revision: 24.09 00:17:00.249 00:17:00.249 00:17:00.249 Commands Supported and Effects 00:17:00.249 ============================== 00:17:00.249 Admin Commands 00:17:00.249 -------------- 00:17:00.249 Get Log Page (02h): Supported 
00:17:00.249 Identify (06h): Supported 00:17:00.249 Abort (08h): Supported 00:17:00.249 Set Features (09h): Supported 00:17:00.249 Get Features (0Ah): Supported 00:17:00.249 Asynchronous Event Request (0Ch): Supported 00:17:00.249 Keep Alive (18h): Supported 00:17:00.249 I/O Commands 00:17:00.249 ------------ 00:17:00.249 Flush (00h): Supported LBA-Change 00:17:00.249 Write (01h): Supported LBA-Change 00:17:00.249 Read (02h): Supported 00:17:00.249 Compare (05h): Supported 00:17:00.249 Write Zeroes (08h): Supported LBA-Change 00:17:00.249 Dataset Management (09h): Supported LBA-Change 00:17:00.249 Copy (19h): Supported LBA-Change 00:17:00.249 00:17:00.249 Error Log 00:17:00.249 ========= 00:17:00.249 00:17:00.249 Arbitration 00:17:00.249 =========== 00:17:00.249 Arbitration Burst: 1 00:17:00.249 00:17:00.249 Power Management 00:17:00.249 ================ 00:17:00.249 Number of Power States: 1 00:17:00.249 Current Power State: Power State #0 00:17:00.249 Power State #0: 00:17:00.249 Max Power: 0.00 W 00:17:00.249 Non-Operational State: Operational 00:17:00.249 Entry Latency: Not Reported 00:17:00.249 Exit Latency: Not Reported 00:17:00.249 Relative Read Throughput: 0 00:17:00.249 Relative Read Latency: 0 00:17:00.249 Relative Write Throughput: 0 00:17:00.249 Relative Write Latency: 0 00:17:00.249 Idle Power: Not Reported 00:17:00.249 Active Power: Not Reported 00:17:00.249 Non-Operational Permissive Mode: Not Supported 00:17:00.249 00:17:00.249 Health Information 00:17:00.249 ================== 00:17:00.249 Critical Warnings: 00:17:00.249 Available Spare Space: OK 00:17:00.249 Temperature: OK 00:17:00.249 Device Reliability: OK 00:17:00.249 Read Only: No 00:17:00.249 Volatile Memory Backup: OK 00:17:00.249 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:00.249 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:00.249 Available Spare: 0% 00:17:00.249 Available Sp[2024-07-26 11:05:19.524170] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:00.249 [2024-07-26 11:05:19.532050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:00.249 [2024-07-26 11:05:19.532078] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:00.249 [2024-07-26 11:05:19.532086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.249 [2024-07-26 11:05:19.532091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.250 [2024-07-26 11:05:19.532097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.250 [2024-07-26 11:05:19.532103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:00.250 [2024-07-26 11:05:19.532142] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:00.250 [2024-07-26 11:05:19.532152] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:00.250 [2024-07-26 11:05:19.533145] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling 
controller 00:17:00.250 [2024-07-26 11:05:19.533189] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:00.250 [2024-07-26 11:05:19.533196] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:00.250 [2024-07-26 11:05:19.534150] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:00.250 [2024-07-26 11:05:19.534161] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:00.250 [2024-07-26 11:05:19.534206] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:00.250 [2024-07-26 11:05:19.535187] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:00.250 are Threshold: 0% 00:17:00.250 Life Percentage Used: 0% 00:17:00.250 Data Units Read: 0 00:17:00.250 Data Units Written: 0 00:17:00.250 Host Read Commands: 0 00:17:00.250 Host Write Commands: 0 00:17:00.250 Controller Busy Time: 0 minutes 00:17:00.250 Power Cycles: 0 00:17:00.250 Power On Hours: 0 hours 00:17:00.250 Unsafe Shutdowns: 0 00:17:00.250 Unrecoverable Media Errors: 0 00:17:00.250 Lifetime Error Log Entries: 0 00:17:00.250 Warning Temperature Time: 0 minutes 00:17:00.250 Critical Temperature Time: 0 minutes 00:17:00.250 00:17:00.250 Number of Queues 00:17:00.250 ================ 00:17:00.250 Number of I/O Submission Queues: 127 00:17:00.250 Number of I/O Completion Queues: 127 00:17:00.250 00:17:00.250 Active Namespaces 00:17:00.250 ================= 00:17:00.250 Namespace ID:1 00:17:00.250 Error Recovery Timeout: Unlimited 00:17:00.250 Command Set Identifier: NVM (00h) 00:17:00.250 Deallocate: Supported 00:17:00.250 Deallocated/Unwritten Error: Not Supported 00:17:00.250 Deallocated Read Value: Unknown 00:17:00.250 Deallocate in Write Zeroes: Not Supported 00:17:00.250 Deallocated Guard Field: 0xFFFF 00:17:00.250 Flush: Supported 00:17:00.250 Reservation: Supported 00:17:00.250 Namespace Sharing Capabilities: Multiple Controllers 00:17:00.250 Size (in LBAs): 131072 (0GiB) 00:17:00.250 Capacity (in LBAs): 131072 (0GiB) 00:17:00.250 Utilization (in LBAs): 131072 (0GiB) 00:17:00.250 NGUID: 4D1E4A31A93E48A1939AC768353EF8BD 00:17:00.250 UUID: 4d1e4a31-a93e-48a1-939a-c768353ef8bd 00:17:00.250 Thin Provisioning: Not Supported 00:17:00.250 Per-NS Atomic Units: Yes 00:17:00.250 Atomic Boundary Size (Normal): 0 00:17:00.250 Atomic Boundary Size (PFail): 0 00:17:00.250 Atomic Boundary Offset: 0 00:17:00.250 Maximum Single Source Range Length: 65535 00:17:00.250 Maximum Copy Length: 65535 00:17:00.250 Maximum Source Range Count: 1 00:17:00.250 NGUID/EUI64 Never Reused: No 00:17:00.250 Namespace Write Protected: No 00:17:00.250 Number of LBA Formats: 1 00:17:00.250 Current LBA Format: LBA Format #00 00:17:00.250 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:00.250 00:17:00.250 11:05:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:00.250 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.544 [2024-07-26 
11:05:19.750442] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:05.824 Initializing NVMe Controllers 00:17:05.824 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:05.824 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:05.824 Initialization complete. Launching workers. 00:17:05.824 ======================================================== 00:17:05.824 Latency(us) 00:17:05.824 Device Information : IOPS MiB/s Average min max 00:17:05.824 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39927.67 155.97 3205.41 977.55 7150.21 00:17:05.824 ======================================================== 00:17:05.824 Total : 39927.67 155.97 3205.41 977.55 7150.21 00:17:05.824 00:17:05.824 [2024-07-26 11:05:24.858290] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:05.824 11:05:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:05.824 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.824 [2024-07-26 11:05:25.073943] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:11.106 Initializing NVMe Controllers 00:17:11.106 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:11.106 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:11.106 Initialization complete. Launching workers. 
00:17:11.106 ======================================================== 00:17:11.106 Latency(us) 00:17:11.106 Device Information : IOPS MiB/s Average min max 00:17:11.106 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39878.05 155.77 3209.62 983.17 10213.49 00:17:11.106 ======================================================== 00:17:11.106 Total : 39878.05 155.77 3209.62 983.17 10213.49 00:17:11.106 00:17:11.106 [2024-07-26 11:05:30.095535] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:11.106 11:05:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:11.106 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.106 [2024-07-26 11:05:30.294002] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:16.388 [2024-07-26 11:05:35.436136] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:16.388 Initializing NVMe Controllers 00:17:16.388 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:16.388 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:16.388 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:16.388 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:16.388 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:16.388 Initialization complete. Launching workers. 00:17:16.388 Starting thread on core 2 00:17:16.388 Starting thread on core 3 00:17:16.388 Starting thread on core 1 00:17:16.388 11:05:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:16.388 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.388 [2024-07-26 11:05:35.717473] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:19.683 [2024-07-26 11:05:38.796621] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:19.683 Initializing NVMe Controllers 00:17:19.683 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:19.683 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:19.683 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:19.683 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:19.683 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:19.683 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:19.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:19.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:19.683 Initialization complete. Launching workers. 
00:17:19.683 Starting thread on core 1 with urgent priority queue 00:17:19.683 Starting thread on core 2 with urgent priority queue 00:17:19.683 Starting thread on core 3 with urgent priority queue 00:17:19.683 Starting thread on core 0 with urgent priority queue 00:17:19.683 SPDK bdev Controller (SPDK2 ) core 0: 5719.67 IO/s 17.48 secs/100000 ios 00:17:19.683 SPDK bdev Controller (SPDK2 ) core 1: 6464.33 IO/s 15.47 secs/100000 ios 00:17:19.683 SPDK bdev Controller (SPDK2 ) core 2: 5323.67 IO/s 18.78 secs/100000 ios 00:17:19.683 SPDK bdev Controller (SPDK2 ) core 3: 5522.00 IO/s 18.11 secs/100000 ios 00:17:19.683 ======================================================== 00:17:19.683 00:17:19.683 11:05:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:19.683 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.683 [2024-07-26 11:05:39.068506] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:19.683 Initializing NVMe Controllers 00:17:19.683 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:19.683 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:19.683 Namespace ID: 1 size: 0GB 00:17:19.683 Initialization complete. 00:17:19.683 INFO: using host memory buffer for IO 00:17:19.683 Hello world! 00:17:19.683 [2024-07-26 11:05:39.080598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:19.683 11:05:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:19.683 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.943 [2024-07-26 11:05:39.343032] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:21.326 Initializing NVMe Controllers 00:17:21.326 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:21.326 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:21.326 Initialization complete. Launching workers. 
00:17:21.326 submit (in ns) avg, min, max = 6985.7, 3277.4, 4995710.4 00:17:21.326 complete (in ns) avg, min, max = 18583.1, 1800.0, 4000700.9 00:17:21.326 00:17:21.326 Submit histogram 00:17:21.326 ================ 00:17:21.326 Range in us Cumulative Count 00:17:21.326 3.270 - 3.283: 0.0123% ( 2) 00:17:21.326 3.283 - 3.297: 0.1847% ( 28) 00:17:21.326 3.297 - 3.311: 1.5639% ( 224) 00:17:21.326 3.311 - 3.325: 5.3260% ( 611) 00:17:21.326 3.325 - 3.339: 10.2210% ( 795) 00:17:21.326 3.339 - 3.353: 15.7872% ( 904) 00:17:21.326 3.353 - 3.367: 21.8829% ( 990) 00:17:21.326 3.367 - 3.381: 27.7384% ( 951) 00:17:21.326 3.381 - 3.395: 32.8428% ( 829) 00:17:21.326 3.395 - 3.409: 37.8056% ( 806) 00:17:21.326 3.409 - 3.423: 42.6883% ( 793) 00:17:21.326 3.423 - 3.437: 46.7089% ( 653) 00:17:21.326 3.437 - 3.450: 50.9390% ( 687) 00:17:21.326 3.450 - 3.464: 56.2158% ( 857) 00:17:21.326 3.464 - 3.478: 62.4284% ( 1009) 00:17:21.326 3.478 - 3.492: 67.2619% ( 785) 00:17:21.326 3.492 - 3.506: 72.1323% ( 791) 00:17:21.326 3.506 - 3.520: 76.8426% ( 765) 00:17:21.326 3.520 - 3.534: 80.5308% ( 599) 00:17:21.326 3.534 - 3.548: 83.0860% ( 415) 00:17:21.326 3.548 - 3.562: 84.6376% ( 252) 00:17:21.326 3.562 - 3.590: 86.0846% ( 235) 00:17:21.326 3.590 - 3.617: 87.2914% ( 196) 00:17:21.326 3.617 - 3.645: 88.8492% ( 253) 00:17:21.326 3.645 - 3.673: 90.6533% ( 293) 00:17:21.326 3.673 - 3.701: 92.2788% ( 264) 00:17:21.326 3.701 - 3.729: 94.1075% ( 297) 00:17:21.326 3.729 - 3.757: 95.8562% ( 284) 00:17:21.326 3.757 - 3.784: 97.1738% ( 214) 00:17:21.326 3.784 - 3.812: 98.0358% ( 140) 00:17:21.326 3.812 - 3.840: 98.6639% ( 102) 00:17:21.326 3.840 - 3.868: 99.0148% ( 57) 00:17:21.326 3.868 - 3.896: 99.1503% ( 22) 00:17:21.326 3.896 - 3.923: 99.2550% ( 17) 00:17:21.326 3.923 - 3.951: 99.2981% ( 7) 00:17:21.326 3.951 - 3.979: 99.3473% ( 8) 00:17:21.326 3.979 - 4.007: 99.3596% ( 2) 00:17:21.326 4.007 - 4.035: 99.3781% ( 3) 00:17:21.326 4.035 - 4.063: 99.4027% ( 4) 00:17:21.326 4.063 - 4.090: 99.4089% ( 1) 00:17:21.326 4.090 - 4.118: 99.4274% ( 3) 00:17:21.326 4.118 - 4.146: 99.4520% ( 4) 00:17:21.326 4.146 - 4.174: 99.4643% ( 2) 00:17:21.326 4.230 - 4.257: 99.4705% ( 1) 00:17:21.326 4.257 - 4.285: 99.4766% ( 1) 00:17:21.326 4.285 - 4.313: 99.4828% ( 1) 00:17:21.326 4.313 - 4.341: 99.4951% ( 2) 00:17:21.326 4.341 - 4.369: 99.5136% ( 3) 00:17:21.326 4.369 - 4.397: 99.5320% ( 3) 00:17:21.326 4.452 - 4.480: 99.5382% ( 1) 00:17:21.326 4.480 - 4.508: 99.5444% ( 1) 00:17:21.326 4.508 - 4.536: 99.5567% ( 2) 00:17:21.326 4.730 - 4.758: 99.5628% ( 1) 00:17:21.326 4.814 - 4.842: 99.5690% ( 1) 00:17:21.326 4.897 - 4.925: 99.5751% ( 1) 00:17:21.326 5.398 - 5.426: 99.5813% ( 1) 00:17:21.326 5.510 - 5.537: 99.5875% ( 1) 00:17:21.326 5.649 - 5.677: 99.5936% ( 1) 00:17:21.326 5.871 - 5.899: 99.5998% ( 1) 00:17:21.326 6.066 - 6.094: 99.6059% ( 1) 00:17:21.326 6.094 - 6.122: 99.6121% ( 1) 00:17:21.326 6.233 - 6.261: 99.6183% ( 1) 00:17:21.326 6.289 - 6.317: 99.6306% ( 2) 00:17:21.326 6.372 - 6.400: 99.6367% ( 1) 00:17:21.326 6.456 - 6.483: 99.6552% ( 3) 00:17:21.326 6.483 - 6.511: 99.6614% ( 1) 00:17:21.326 6.511 - 6.539: 99.6675% ( 1) 00:17:21.326 6.539 - 6.567: 99.6737% ( 1) 00:17:21.326 6.595 - 6.623: 99.6798% ( 1) 00:17:21.326 6.623 - 6.650: 99.6921% ( 2) 00:17:21.326 6.650 - 6.678: 99.7045% ( 2) 00:17:21.326 6.762 - 6.790: 99.7106% ( 1) 00:17:21.326 6.790 - 6.817: 99.7168% ( 1) 00:17:21.326 6.817 - 6.845: 99.7291% ( 2) 00:17:21.326 6.984 - 7.012: 99.7352% ( 1) 00:17:21.326 7.096 - 7.123: 99.7414% ( 1) 00:17:21.326 7.123 - 7.179: 99.7537% ( 2) 
00:17:21.326 7.179 - 7.235: 99.7660% ( 2) 00:17:21.326 7.235 - 7.290: 99.7722% ( 1) 00:17:21.326 7.346 - 7.402: 99.7783% ( 1) 00:17:21.326 7.513 - 7.569: 99.7907% ( 2) 00:17:21.326 7.569 - 7.624: 99.8030% ( 2) 00:17:21.326 7.624 - 7.680: 99.8153% ( 2) 00:17:21.326 7.680 - 7.736: 99.8214% ( 1) 00:17:21.326 7.736 - 7.791: 99.8276% ( 1) 00:17:21.326 7.791 - 7.847: 99.8338% ( 1) 00:17:21.326 7.847 - 7.903: 99.8399% ( 1) 00:17:21.326 8.125 - 8.181: 99.8461% ( 1) 00:17:21.326 8.181 - 8.237: 99.8522% ( 1) 00:17:21.326 8.515 - 8.570: 99.8584% ( 1) 00:17:21.326 8.570 - 8.626: 99.8645% ( 1) 00:17:21.326 8.793 - 8.849: 99.8707% ( 1) 00:17:21.326 8.849 - 8.904: 99.8769% ( 1) 00:17:21.326 9.461 - 9.517: 99.8830% ( 1) 00:17:21.326 9.572 - 9.628: 99.8892% ( 1) 00:17:21.326 9.795 - 9.850: 99.8953% ( 1) 00:17:21.326 10.574 - 10.630: 99.9015% ( 1) 00:17:21.326 11.297 - 11.353: 99.9076% ( 1) 00:17:21.326 26.379 - 26.490: 99.9138% ( 1) 00:17:21.326 3989.148 - 4017.642: 99.9938% ( 13) 00:17:21.326 4986.435 - 5014.929: 100.0000% ( 1) 00:17:21.326 00:17:21.326 Complete histogram 00:17:21.326 ================== 00:17:21.326 Range in us Cumulative Count 00:17:21.326 1.795 - 1.809: 0.0985% ( 16) 00:17:21.326 1.809 - 1.823: 10.9538% ( 1763) 00:17:21.326 1.823 - 1.837: 57.9583% ( 7634) 00:17:21.326 1.837 - 1.850: 71.9783% ( 2277) 00:17:21.326 1.850 - 1.864: 74.8230% ( 462) 00:17:21.326 1.864 - 1.878: 80.4569% ( 915) 00:17:21.326 1.878 - 1.892: 92.3157% ( 1926) 00:17:21.326 1.892 - 1.906: 95.9485% ( 590) 00:17:21.326 1.906 - 1.920: 97.3093% ( 221) 00:17:21.327 1.920 - 1.934: 97.8450% ( 87) 00:17:21.327 1.934 - 1.948: 98.0605% ( 35) 00:17:21.327 1.948 - 1.962: 98.3375% ( 45) 00:17:21.327 1.962 - 1.976: 98.4853% ( 24) 00:17:21.327 1.976 - 1.990: 98.5592% ( 12) 00:17:21.327 1.990 - 2.003: 98.6269% ( 11) 00:17:21.327 2.003 - 2.017: 98.7131% ( 14) 00:17:21.327 2.017 - 2.031: 98.7685% ( 9) 00:17:21.327 2.031 - 2.045: 98.8055% ( 6) 00:17:21.327 2.045 - 2.059: 98.8671% ( 10) 00:17:21.327 2.059 - 2.073: 98.9102% ( 7) 00:17:21.327 2.073 - 2.087: 98.9533% ( 7) 00:17:21.327 2.087 - 2.101: 98.9779% ( 4) 00:17:21.327 2.101 - 2.115: 99.0087% ( 5) 00:17:21.327 2.115 - 2.129: 99.0272% ( 3) 00:17:21.327 2.129 - 2.143: 99.0518% ( 4) 00:17:21.327 2.143 - 2.157: 99.0641% ( 2) 00:17:21.327 2.157 - 2.170: 99.0764% ( 2) 00:17:21.327 2.170 - 2.184: 99.0826% ( 1) 00:17:21.327 2.212 - 2.226: 99.0949% ( 2) 00:17:21.327 2.226 - 2.240: 99.1072% ( 2) 00:17:21.327 2.240 - 2.254: 99.1134% ( 1) 00:17:21.327 2.254 - 2.268: 99.1257% ( 2) 00:17:21.327 2.268 - 2.282: 99.1380% ( 2) 00:17:21.327 2.282 - 2.296: 99.1503% ( 2) 00:17:21.327 2.296 - 2.310: 99.1626% ( 2) 00:17:21.327 2.310 - 2.323: 99.1872% ( 4) 00:17:21.327 2.323 - 2.337: 99.2119% ( 4) 00:17:21.327 2.351 - 2.365: 99.2180% ( 1) 00:17:21.327 2.365 - 2.379: 99.2242% ( 1) 00:17:21.327 2.393 - 2.407: 99.2303% ( 1) 00:17:21.327 2.407 - 2.421: 99.2365% ( 1) 00:17:21.327 2.421 - 2.435: 99.2427% ( 1) 00:17:21.327 2.449 - 2.463: 99.2550% ( 2) 00:17:21.327 2.477 - 2.490: 99.2611% ( 1) 00:17:21.327 2.504 - 2.518: 99.2673% ( 1) 00:17:21.327 2.546 - 2.560: 99.2796% ( 2) 00:17:21.327 2.602 - 2.616: 99.2919% ( 2) 00:17:21.327 2.685 - 2.699: 99.2981% ( 1) 00:17:21.327 2.797 - 2.810: 99.3042% ( 1) 00:17:21.327 3.047 - 3.061: 99.3104% ( 1) 00:17:21.327 3.673 - 3.701: 99.3165% ( 1) 00:17:21.327 3.840 - 3.868: 99.3227% ( 1) 00:17:21.327 4.202 - 4.230: 99.3289% ( 1) 00:17:21.327 4.230 - 4.257: 99.3350% ( 1) 00:17:21.327 4.341 - 4.369: 99.3412% ( 1) 00:17:21.327 4.452 - 4.480: 99.3473% ( 1) 00:17:21.327 4.536 - 4.563: 
99.3535% ( 1) 00:17:21.327 4.619 - 4.647: 99.3596% ( 1) 00:17:21.327 4.703 - 4.730: 99.3658% ( 1) 00:17:21.327 4.870 - 4.897: 99.3781% ( 2) 00:17:21.327 4.953 - 4.981: 99.3843% ( 1) 00:17:21.327 5.037 - 5.064: 99.3966% ( 2) 00:17:21.327 5.064 - 5.092: 99.4089% ( 2) 00:17:21.327 5.092 - 5.1[2024-07-26 11:05:40.436150] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:21.327 20: 99.4151% ( 1) 00:17:21.327 5.203 - 5.231: 99.4274% ( 2) 00:17:21.327 5.259 - 5.287: 99.4335% ( 1) 00:17:21.327 5.343 - 5.370: 99.4397% ( 1) 00:17:21.327 5.454 - 5.482: 99.4458% ( 1) 00:17:21.327 5.482 - 5.510: 99.4520% ( 1) 00:17:21.327 5.537 - 5.565: 99.4582% ( 1) 00:17:21.327 5.593 - 5.621: 99.4643% ( 1) 00:17:21.327 5.760 - 5.788: 99.4705% ( 1) 00:17:21.327 5.816 - 5.843: 99.4766% ( 1) 00:17:21.327 5.843 - 5.871: 99.4828% ( 1) 00:17:21.327 6.122 - 6.150: 99.4889% ( 1) 00:17:21.327 6.205 - 6.233: 99.4951% ( 1) 00:17:21.327 6.483 - 6.511: 99.5013% ( 1) 00:17:21.327 6.567 - 6.595: 99.5074% ( 1) 00:17:21.327 6.984 - 7.012: 99.5136% ( 1) 00:17:21.327 7.346 - 7.402: 99.5197% ( 1) 00:17:21.327 7.791 - 7.847: 99.5259% ( 1) 00:17:21.327 8.515 - 8.570: 99.5320% ( 1) 00:17:21.327 8.570 - 8.626: 99.5382% ( 1) 00:17:21.327 8.849 - 8.904: 99.5444% ( 1) 00:17:21.327 13.635 - 13.690: 99.5505% ( 1) 00:17:21.327 13.969 - 14.024: 99.5567% ( 1) 00:17:21.327 14.358 - 14.470: 99.5628% ( 1) 00:17:21.327 18.365 - 18.477: 99.5690% ( 1) 00:17:21.327 18.477 - 18.588: 99.5751% ( 1) 00:17:21.327 29.607 - 29.830: 99.5813% ( 1) 00:17:21.327 3903.666 - 3932.160: 99.5875% ( 1) 00:17:21.327 3989.148 - 4017.642: 100.0000% ( 67) 00:17:21.327 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:21.327 [ 00:17:21.327 { 00:17:21.327 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:21.327 "subtype": "Discovery", 00:17:21.327 "listen_addresses": [], 00:17:21.327 "allow_any_host": true, 00:17:21.327 "hosts": [] 00:17:21.327 }, 00:17:21.327 { 00:17:21.327 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:21.327 "subtype": "NVMe", 00:17:21.327 "listen_addresses": [ 00:17:21.327 { 00:17:21.327 "trtype": "VFIOUSER", 00:17:21.327 "adrfam": "IPv4", 00:17:21.327 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:21.327 "trsvcid": "0" 00:17:21.327 } 00:17:21.327 ], 00:17:21.327 "allow_any_host": true, 00:17:21.327 "hosts": [], 00:17:21.327 "serial_number": "SPDK1", 00:17:21.327 "model_number": "SPDK bdev Controller", 00:17:21.327 "max_namespaces": 32, 00:17:21.327 "min_cntlid": 1, 00:17:21.327 "max_cntlid": 65519, 00:17:21.327 "namespaces": [ 00:17:21.327 { 00:17:21.327 "nsid": 1, 00:17:21.327 "bdev_name": "Malloc1", 00:17:21.327 "name": "Malloc1", 00:17:21.327 "nguid": "5EC1CDB78DE84AA8854E05BDA883A684", 00:17:21.327 "uuid": "5ec1cdb7-8de8-4aa8-854e-05bda883a684" 00:17:21.327 
}, 00:17:21.327 { 00:17:21.327 "nsid": 2, 00:17:21.327 "bdev_name": "Malloc3", 00:17:21.327 "name": "Malloc3", 00:17:21.327 "nguid": "65613E3DD6E1469D97E5B191FFEB26DA", 00:17:21.327 "uuid": "65613e3d-d6e1-469d-97e5-b191ffeb26da" 00:17:21.327 } 00:17:21.327 ] 00:17:21.327 }, 00:17:21.327 { 00:17:21.327 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:21.327 "subtype": "NVMe", 00:17:21.327 "listen_addresses": [ 00:17:21.327 { 00:17:21.327 "trtype": "VFIOUSER", 00:17:21.327 "adrfam": "IPv4", 00:17:21.327 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:21.327 "trsvcid": "0" 00:17:21.327 } 00:17:21.327 ], 00:17:21.327 "allow_any_host": true, 00:17:21.327 "hosts": [], 00:17:21.327 "serial_number": "SPDK2", 00:17:21.327 "model_number": "SPDK bdev Controller", 00:17:21.327 "max_namespaces": 32, 00:17:21.327 "min_cntlid": 1, 00:17:21.327 "max_cntlid": 65519, 00:17:21.327 "namespaces": [ 00:17:21.327 { 00:17:21.327 "nsid": 1, 00:17:21.327 "bdev_name": "Malloc2", 00:17:21.327 "name": "Malloc2", 00:17:21.327 "nguid": "4D1E4A31A93E48A1939AC768353EF8BD", 00:17:21.327 "uuid": "4d1e4a31-a93e-48a1-939a-c768353ef8bd" 00:17:21.327 } 00:17:21.327 ] 00:17:21.327 } 00:17:21.327 ] 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1438985 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:21.327 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:21.327 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.327 [2024-07-26 11:05:40.818574] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:21.588 Malloc4 00:17:21.588 11:05:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:21.588 [2024-07-26 11:05:41.037189] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:21.588 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:21.588 Asynchronous Event Request test 00:17:21.588 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:21.588 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:21.588 Registering asynchronous event callbacks... 00:17:21.588 Starting namespace attribute notice tests for all controllers... 00:17:21.588 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:21.588 aer_cb - Changed Namespace 00:17:21.588 Cleaning up... 00:17:21.848 [ 00:17:21.848 { 00:17:21.848 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:21.848 "subtype": "Discovery", 00:17:21.848 "listen_addresses": [], 00:17:21.848 "allow_any_host": true, 00:17:21.848 "hosts": [] 00:17:21.848 }, 00:17:21.848 { 00:17:21.848 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:21.848 "subtype": "NVMe", 00:17:21.848 "listen_addresses": [ 00:17:21.848 { 00:17:21.848 "trtype": "VFIOUSER", 00:17:21.848 "adrfam": "IPv4", 00:17:21.848 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:21.848 "trsvcid": "0" 00:17:21.848 } 00:17:21.848 ], 00:17:21.848 "allow_any_host": true, 00:17:21.848 "hosts": [], 00:17:21.848 "serial_number": "SPDK1", 00:17:21.848 "model_number": "SPDK bdev Controller", 00:17:21.848 "max_namespaces": 32, 00:17:21.848 "min_cntlid": 1, 00:17:21.848 "max_cntlid": 65519, 00:17:21.848 "namespaces": [ 00:17:21.848 { 00:17:21.848 "nsid": 1, 00:17:21.848 "bdev_name": "Malloc1", 00:17:21.848 "name": "Malloc1", 00:17:21.848 "nguid": "5EC1CDB78DE84AA8854E05BDA883A684", 00:17:21.848 "uuid": "5ec1cdb7-8de8-4aa8-854e-05bda883a684" 00:17:21.848 }, 00:17:21.848 { 00:17:21.848 "nsid": 2, 00:17:21.848 "bdev_name": "Malloc3", 00:17:21.848 "name": "Malloc3", 00:17:21.848 "nguid": "65613E3DD6E1469D97E5B191FFEB26DA", 00:17:21.848 "uuid": "65613e3d-d6e1-469d-97e5-b191ffeb26da" 00:17:21.848 } 00:17:21.848 ] 00:17:21.848 }, 00:17:21.848 { 00:17:21.848 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:21.848 "subtype": "NVMe", 00:17:21.848 "listen_addresses": [ 00:17:21.848 { 00:17:21.848 "trtype": "VFIOUSER", 00:17:21.848 "adrfam": "IPv4", 00:17:21.848 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:21.848 "trsvcid": "0" 00:17:21.848 } 00:17:21.848 ], 00:17:21.848 "allow_any_host": true, 00:17:21.848 "hosts": [], 00:17:21.848 
"serial_number": "SPDK2", 00:17:21.848 "model_number": "SPDK bdev Controller", 00:17:21.848 "max_namespaces": 32, 00:17:21.848 "min_cntlid": 1, 00:17:21.848 "max_cntlid": 65519, 00:17:21.848 "namespaces": [ 00:17:21.848 { 00:17:21.848 "nsid": 1, 00:17:21.848 "bdev_name": "Malloc2", 00:17:21.848 "name": "Malloc2", 00:17:21.848 "nguid": "4D1E4A31A93E48A1939AC768353EF8BD", 00:17:21.848 "uuid": "4d1e4a31-a93e-48a1-939a-c768353ef8bd" 00:17:21.848 }, 00:17:21.848 { 00:17:21.848 "nsid": 2, 00:17:21.848 "bdev_name": "Malloc4", 00:17:21.848 "name": "Malloc4", 00:17:21.848 "nguid": "D0D2414BA4C24FB684F08323DACCFCAE", 00:17:21.848 "uuid": "d0d2414b-a4c2-4fb6-84f0-8323daccfcae" 00:17:21.848 } 00:17:21.848 ] 00:17:21.848 } 00:17:21.848 ] 00:17:21.848 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1438985 00:17:21.848 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:21.848 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1431279 00:17:21.848 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1431279 ']' 00:17:21.848 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1431279 00:17:21.848 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:21.848 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:21.848 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1431279 00:17:21.848 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:21.849 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:21.849 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1431279' 00:17:21.849 killing process with pid 1431279 00:17:21.849 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1431279 00:17:21.849 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1431279 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1439130 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1439130' 00:17:22.109 Process pid: 1439130 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:22.109 11:05:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1439130 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1439130 ']' 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:22.109 11:05:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:22.109 [2024-07-26 11:05:41.597130] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:22.109 [2024-07-26 11:05:41.597962] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:22.109 [2024-07-26 11:05:41.598001] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.370 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.370 [2024-07-26 11:05:41.653119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:22.370 [2024-07-26 11:05:41.733105] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.370 [2024-07-26 11:05:41.733144] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.370 [2024-07-26 11:05:41.733151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.370 [2024-07-26 11:05:41.733157] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.370 [2024-07-26 11:05:41.733162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.370 [2024-07-26 11:05:41.733225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.370 [2024-07-26 11:05:41.733320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.370 [2024-07-26 11:05:41.733405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:22.370 [2024-07-26 11:05:41.733407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.370 [2024-07-26 11:05:41.810171] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:22.370 [2024-07-26 11:05:41.810268] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:22.370 [2024-07-26 11:05:41.810443] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:17:22.370 [2024-07-26 11:05:41.810833] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:22.370 [2024-07-26 11:05:41.811087] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:22.940 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.940 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:22.940 11:05:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:24.324 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:24.324 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:24.324 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:24.324 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:24.324 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:24.324 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:24.324 Malloc1 00:17:24.324 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:24.585 11:05:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:24.845 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:24.845 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:24.845 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:24.845 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:25.104 Malloc2 00:17:25.104 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:25.363 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:25.622 11:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:17:25.622 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:25.622 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1439130 00:17:25.622 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1439130 ']' 00:17:25.622 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1439130 00:17:25.622 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:25.622 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.622 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1439130 00:17:25.622 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:25.622 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:25.622 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1439130' 00:17:25.622 killing process with pid 1439130 00:17:25.883 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1439130 00:17:25.883 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1439130 00:17:25.883 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:25.883 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:25.883 00:17:25.883 real 0m51.303s 00:17:25.883 user 3m23.157s 00:17:25.883 sys 0m3.555s 00:17:25.883 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:25.883 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:25.883 ************************************ 00:17:25.883 END TEST nvmf_vfio_user 00:17:25.883 ************************************ 00:17:25.883 11:05:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:25.883 11:05:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:25.883 11:05:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.883 11:05:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 ************************************ 00:17:26.144 START TEST nvmf_vfio_user_nvme_compliance 00:17:26.144 ************************************ 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:26.144 * Looking for test storage... 
00:17:26.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.144 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1439888 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1439888' 00:17:26.145 Process pid: 1439888 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1439888 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1439888 ']' 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:26.145 11:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:26.145 [2024-07-26 11:05:45.573949] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:26.145 [2024-07-26 11:05:45.573992] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.145 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.145 [2024-07-26 11:05:45.628961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:26.405 [2024-07-26 11:05:45.701883] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.405 [2024-07-26 11:05:45.701924] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.405 [2024-07-26 11:05:45.701932] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.405 [2024-07-26 11:05:45.701937] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.405 [2024-07-26 11:05:45.701942] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.405 [2024-07-26 11:05:45.702005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.405 [2024-07-26 11:05:45.702090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.405 [2024-07-26 11:05:45.702091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.973 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:26.973 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:17:26.973 11:05:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:27.915 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:27.915 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:27.915 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:27.915 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.915 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:27.915 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.915 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:27.915 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:27.915 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.915 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:28.234 malloc0 00:17:28.234 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.234 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:17:28.234 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.234 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:28.234 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.234 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:28.234 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.234 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:28.234 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.234 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:28.234 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.234 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:28.234 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.234 11:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:28.234 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.234 00:17:28.234 00:17:28.234 CUnit - A unit testing framework for C - Version 2.1-3 00:17:28.234 http://cunit.sourceforge.net/ 00:17:28.234 00:17:28.234 00:17:28.234 Suite: nvme_compliance 00:17:28.234 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-26 11:05:47.601498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:28.234 [2024-07-26 11:05:47.602835] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:28.234 [2024-07-26 11:05:47.602850] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:28.234 [2024-07-26 11:05:47.602857] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:28.234 [2024-07-26 11:05:47.604520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:28.234 passed 00:17:28.234 Test: admin_identify_ctrlr_verify_fused ...[2024-07-26 11:05:47.684119] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:28.234 [2024-07-26 11:05:47.687143] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:28.495 passed 00:17:28.495 Test: admin_identify_ns ...[2024-07-26 11:05:47.767204] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:28.495 [2024-07-26 11:05:47.829053] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:28.495 [2024-07-26 11:05:47.837058] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:28.495 [2024-07-26 
11:05:47.858155] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:28.495 passed 00:17:28.495 Test: admin_get_features_mandatory_features ...[2024-07-26 11:05:47.935474] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:28.495 [2024-07-26 11:05:47.938497] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:28.495 passed 00:17:28.755 Test: admin_get_features_optional_features ...[2024-07-26 11:05:48.019032] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:28.755 [2024-07-26 11:05:48.022056] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:28.755 passed 00:17:28.755 Test: admin_set_features_number_of_queues ...[2024-07-26 11:05:48.100542] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:28.755 [2024-07-26 11:05:48.206139] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:28.755 passed 00:17:29.015 Test: admin_get_log_page_mandatory_logs ...[2024-07-26 11:05:48.281254] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:29.015 [2024-07-26 11:05:48.284276] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:29.015 passed 00:17:29.015 Test: admin_get_log_page_with_lpo ...[2024-07-26 11:05:48.362514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:29.015 [2024-07-26 11:05:48.434056] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:29.015 [2024-07-26 11:05:48.447106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:29.015 passed 00:17:29.275 Test: fabric_property_get ...[2024-07-26 11:05:48.522230] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:29.275 [2024-07-26 11:05:48.523462] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:29.275 [2024-07-26 11:05:48.525256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:29.275 passed 00:17:29.275 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-26 11:05:48.603785] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:29.275 [2024-07-26 11:05:48.605020] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:29.275 [2024-07-26 11:05:48.606808] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:29.275 passed 00:17:29.275 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-26 11:05:48.684678] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:29.275 [2024-07-26 11:05:48.770058] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:29.535 [2024-07-26 11:05:48.786049] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:29.535 [2024-07-26 11:05:48.791138] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:29.535 passed 00:17:29.535 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-26 11:05:48.866284] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:29.535 [2024-07-26 11:05:48.867523] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:17:29.535 [2024-07-26 11:05:48.870314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:29.535 passed 00:17:29.535 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-26 11:05:48.948256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:29.535 [2024-07-26 11:05:49.025054] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:29.796 [2024-07-26 11:05:49.049053] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:29.796 [2024-07-26 11:05:49.054141] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:29.796 passed 00:17:29.796 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-26 11:05:49.129420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:29.796 [2024-07-26 11:05:49.130653] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:29.796 [2024-07-26 11:05:49.130676] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:29.796 [2024-07-26 11:05:49.132441] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:29.796 passed 00:17:29.796 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-26 11:05:49.210405] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:30.056 [2024-07-26 11:05:49.303051] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:30.056 [2024-07-26 11:05:49.311049] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:30.056 [2024-07-26 11:05:49.319055] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:30.056 [2024-07-26 11:05:49.327050] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:30.056 [2024-07-26 11:05:49.356126] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:30.056 passed 00:17:30.056 Test: admin_create_io_sq_verify_pc ...[2024-07-26 11:05:49.433230] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:30.056 [2024-07-26 11:05:49.450058] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:30.056 [2024-07-26 11:05:49.467407] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:30.056 passed 00:17:30.056 Test: admin_create_io_qp_max_qps ...[2024-07-26 11:05:49.547959] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:31.434 [2024-07-26 11:05:50.646053] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:31.694 [2024-07-26 11:05:51.026900] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:31.694 passed 00:17:31.694 Test: admin_create_io_sq_shared_cq ...[2024-07-26 11:05:51.103097] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:31.953 [2024-07-26 11:05:51.236052] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:31.953 [2024-07-26 11:05:51.273120] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:31.953 passed 00:17:31.953 00:17:31.953 Run Summary: Type Total Ran Passed Failed Inactive 00:17:31.953 
suites 1 1 n/a 0 0 00:17:31.953 tests 18 18 18 0 0 00:17:31.953 asserts 360 360 360 0 n/a 00:17:31.953 00:17:31.953 Elapsed time = 1.509 seconds 00:17:31.953 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1439888 00:17:31.953 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1439888 ']' 00:17:31.953 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1439888 00:17:31.953 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:17:31.953 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.953 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1439888 00:17:31.953 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:31.953 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:31.953 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1439888' 00:17:31.953 killing process with pid 1439888 00:17:31.953 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1439888 00:17:31.953 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1439888 00:17:32.212 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:32.212 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:32.212 00:17:32.212 real 0m6.150s 00:17:32.212 user 0m17.621s 00:17:32.212 sys 0m0.426s 00:17:32.212 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:32.212 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:32.212 ************************************ 00:17:32.212 END TEST nvmf_vfio_user_nvme_compliance 00:17:32.212 ************************************ 00:17:32.212 11:05:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:32.212 11:05:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:32.212 11:05:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:32.212 11:05:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:32.212 ************************************ 00:17:32.212 START TEST nvmf_vfio_user_fuzz 00:17:32.212 ************************************ 00:17:32.212 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:32.474 * Looking for test storage... 
00:17:32.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.474 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1440893 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1440893' 00:17:32.475 Process pid: 1440893 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1440893 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1440893 ']' 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.475 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:33.414 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.414 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:17:33.414 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:34.368 malloc0 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
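As a reading aid, the vfio-user fuzz setup traced above can be collected into one short shell sketch. This is not part of the original log: it assumes that rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock, and it shortens the workspace checkout to a hypothetical $SPDK_DIR variable; the individual commands themselves are copied from the trace.

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # hypothetical shorthand for the checkout used in this run
  $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &         # fuzz target launch with the core mask seen in the trace
  until [ -S /var/tmp/spdk.sock ]; do sleep 1; done            # simple stand-in for the script's waitforlisten
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user                                  # directory backing the vfio-user endpoint
  $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  $SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

Once the fuzzer exits, the trace that follows tears the target down again via nvmf_delete_subsystem and killprocess, as shown in the log below.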
00:17:34.368 11:05:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:06.465 Fuzzing completed. Shutting down the fuzz application 00:18:06.465 00:18:06.465 Dumping successful admin opcodes: 00:18:06.465 8, 9, 10, 24, 00:18:06.465 Dumping successful io opcodes: 00:18:06.465 0, 00:18:06.465 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1079731, total successful commands: 4256, random_seed: 3388522752 00:18:06.465 NS: 0x200003a1ef00 admin qp, Total commands completed: 269123, total successful commands: 2167, random_seed: 3407070144 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1440893 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1440893 ']' 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1440893 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1440893 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1440893' 00:18:06.465 killing process with pid 1440893 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1440893 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1440893 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:06.465 00:18:06.465 real 0m32.764s 00:18:06.465 user 0m32.179s 00:18:06.465 sys 0m30.400s 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:06.465 
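[Editor's note] The fuzz run itself is the single nvme_fuzz invocation shown above: roughly 30 seconds against the vfio-user transport ID with a fixed seed, producing the admin and I/O opcode statistics that follow it. Reproduced as a standalone command (flag meanings beyond -m, the core mask, are copied verbatim from the trace rather than documented here):

#!/usr/bin/env bash
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from the trace
TRID='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
# -m 0x2 keeps the fuzzer on core 1, away from the target started with -m 0x1.
# -t 30, -S 123456, -N and -a are passed exactly as they appear in the trace above.
"$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a

Afterwards the script deletes the subsystem, kills the target, and removes /var/run/vfio-user plus the two fuzz log files, as the remaining records of this test show.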
************************************ 00:18:06.465 END TEST nvmf_vfio_user_fuzz 00:18:06.465 ************************************ 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:06.465 ************************************ 00:18:06.465 START TEST nvmf_auth_target 00:18:06.465 ************************************ 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:06.465 * Looking for test storage... 00:18:06.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:06.465 11:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.465 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:06.466 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:09.763 11:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:09.763 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:09.763 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:09.763 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:09.764 Found net devices under 0000:86:00.0: cvl_0_0 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:09.764 11:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:09.764 Found net devices under 0000:86:00.1: cvl_0_1 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:09.764 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:10.024 11:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:10.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:18:10.024 00:18:10.024 --- 10.0.0.2 ping statistics --- 00:18:10.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.024 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:10.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:18:10.024 00:18:10.024 --- 10.0.0.1 ping statistics --- 00:18:10.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.024 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1449872 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1449872 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1449872 ']' 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:10.024 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
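[Editor's note] nvmf_tcp_init in the records above sets up the two-interface topology for the TCP auth test: the first port (cvl_0_0) is moved into a network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator side at 10.0.0.1, TCP port 4420 is opened, and reachability is verified with one ping in each direction. Condensed into a sketch (the cvl_0_0/cvl_0_1 names are specific to this CI host's NICs):

#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk       # namespace name from the trace
TGT_IF=cvl_0_0           # target-side interface, 10.0.0.2 inside the namespace
INI_IF=cvl_0_1           # initiator-side interface, 10.0.0.1 in the default namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

The target application is then started inside that namespace (the ip netns exec ... nvmf_tgt -L nvmf_auth record that follows), so its TCP listener binds to the namespaced 10.0.0.2.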
00:18:10.284 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:10.284 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:10.284 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.907 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:10.907 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:10.907 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:10.907 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:10.907 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.907 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.907 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1449943 00:18:10.907 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:10.907 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1de9199e9ffad0d6bf6bb724ec9abee548583cefbab05b51 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.UUT 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1de9199e9ffad0d6bf6bb724ec9abee548583cefbab05b51 0 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1de9199e9ffad0d6bf6bb724ec9abee548583cefbab05b51 0 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # 
key=1de9199e9ffad0d6bf6bb724ec9abee548583cefbab05b51 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.UUT 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.UUT 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.UUT 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8fcea49ba0fc8b75a3d77072e3ad32e6f5b01e134ee8d19a17362e5465cdd3ed 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.oI0 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8fcea49ba0fc8b75a3d77072e3ad32e6f5b01e134ee8d19a17362e5465cdd3ed 3 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8fcea49ba0fc8b75a3d77072e3ad32e6f5b01e134ee8d19a17362e5465cdd3ed 3 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8fcea49ba0fc8b75a3d77072e3ad32e6f5b01e134ee8d19a17362e5465cdd3ed 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.oI0 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.oI0 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.oI0 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1cfe5d2044cd70e567a73c34e1d08ccc 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hL5 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1cfe5d2044cd70e567a73c34e1d08ccc 1 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1cfe5d2044cd70e567a73c34e1d08ccc 1 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1cfe5d2044cd70e567a73c34e1d08ccc 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hL5 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hL5 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.hL5 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=849f9bbd600d1bfb9de7d88549b68dc7a12457287ed3f5f4 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fAO 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 849f9bbd600d1bfb9de7d88549b68dc7a12457287ed3f5f4 2 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@719 -- # format_key DHHC-1 849f9bbd600d1bfb9de7d88549b68dc7a12457287ed3f5f4 2 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=849f9bbd600d1bfb9de7d88549b68dc7a12457287ed3f5f4 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fAO 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fAO 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.fAO 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=42c8a0a6ea330aacbe686332664df932fa44547ca87882fe 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.tkk 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 42c8a0a6ea330aacbe686332664df932fa44547ca87882fe 2 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 42c8a0a6ea330aacbe686332664df932fa44547ca87882fe 2 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.167 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.168 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=42c8a0a6ea330aacbe686332664df932fa44547ca87882fe 00:18:11.168 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:11.168 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.tkk 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.tkk 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.tkk 
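[Editor's note] auth.sh generates four key / controller-key pairs with gen_dhchap_key, visible above as: draw random bytes with xxd from /dev/urandom, wrap them into a DHHC-1 secret (the encoding itself is done by the small python step inside format_dhchap_key and is not shown in the trace), write the secret to a mktemp file, and chmod it to 0600. Each file is then registered as a named keyring entry on both the target and the host application and later referenced by name during DH-HMAC-CHAP setup. A sketch of that flow, assuming the autotest environment (rootdir, the nvmf/common.sh helpers) is already sourced as it is in this run:

#!/usr/bin/env bash
set -e
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from the trace
RPC="$SPDK_DIR/scripts/rpc.py"

# gen_dhchap_key <digest> <hex-length>; digest is null/sha256/sha384/sha512 as in the trace,
# and the helper echoes the path of the 0600 temp file holding the DHHC-1 secret.
key_file=$(gen_dhchap_key null 48)       # e.g. /tmp/spdk.key-null.XXX
ckey_file=$(gen_dhchap_key sha512 64)    # controller key for mutual authentication

# Register the secrets under well-known names on the target and on the host app (-s /var/tmp/host.sock).
"$RPC" keyring_file_add_key key0 "$key_file"
"$RPC" keyring_file_add_key ckey0 "$ckey_file"
"$RPC" -s /var/tmp/host.sock keyring_file_add_key key0 "$key_file"
"$RPC" -s /var/tmp/host.sock keyring_file_add_key ckey0 "$ckey_file"

The key names (key0/ckey0 through key3) are what the later records pass to nvmf_subsystem_add_host and bdev_nvme_attach_controller via --dhchap-key / --dhchap-ctrlr-key, while the nvme connect call near the end passes the raw DHHC-1 secrets directly with --dhchap-secret / --dhchap-ctrl-secret.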
00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a1e1e1eb712451082a4965ad6ca3b719 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.w5U 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a1e1e1eb712451082a4965ad6ca3b719 1 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a1e1e1eb712451082a4965ad6ca3b719 1 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a1e1e1eb712451082a4965ad6ca3b719 00:18:11.426 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.w5U 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.w5U 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.w5U 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=50ff5b5bc4e0d80ca4311727b7ee695ae446a87942a26bfd6312a1f8ebc965e3 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t 
spdk.key-sha512.XXX 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.z8v 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 50ff5b5bc4e0d80ca4311727b7ee695ae446a87942a26bfd6312a1f8ebc965e3 3 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 50ff5b5bc4e0d80ca4311727b7ee695ae446a87942a26bfd6312a1f8ebc965e3 3 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=50ff5b5bc4e0d80ca4311727b7ee695ae446a87942a26bfd6312a1f8ebc965e3 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.z8v 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.z8v 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.z8v 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1449872 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1449872 ']' 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.427 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.686 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:11.686 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:11.686 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1449943 /var/tmp/host.sock 00:18:11.686 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1449943 ']' 00:18:11.686 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:18:11.686 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.686 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:18:11.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:11.686 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.686 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.686 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:11.686 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:11.686 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:11.686 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.686 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.945 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.945 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:11.945 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.UUT 00:18:11.945 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.945 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.946 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.946 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.UUT 00:18:11.946 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.UUT 00:18:11.946 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.oI0 ]] 00:18:11.946 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oI0 00:18:11.946 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.946 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.946 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.946 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oI0 00:18:11.946 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oI0 00:18:12.205 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:12.205 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.hL5 00:18:12.205 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.205 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.205 11:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.205 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.hL5 00:18:12.205 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.hL5 00:18:12.466 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.fAO ]] 00:18:12.466 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fAO 00:18:12.466 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.466 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.466 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.466 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fAO 00:18:12.466 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fAO 00:18:12.466 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:12.466 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.tkk 00:18:12.466 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.466 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.466 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.466 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.tkk 00:18:12.466 11:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.tkk 00:18:12.726 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.w5U ]] 00:18:12.726 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.w5U 00:18:12.726 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.726 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.726 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.726 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.w5U 00:18:12.726 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.w5U 00:18:12.986 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:12.986 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.z8v 00:18:12.986 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.986 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.986 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.986 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.z8v 00:18:12.986 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.z8v 00:18:12.986 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:12.986 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:12.986 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.986 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.986 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:12.986 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:13.246 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:13.246 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.246 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.246 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:13.246 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:13.246 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.246 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.246 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.247 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.247 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.247 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.247 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.506 00:18:13.506 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.506 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.506 11:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.766 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.766 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.766 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.766 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.766 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.766 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.766 { 00:18:13.766 "cntlid": 1, 00:18:13.766 "qid": 0, 00:18:13.766 "state": "enabled", 00:18:13.766 "thread": "nvmf_tgt_poll_group_000", 00:18:13.766 "listen_address": { 00:18:13.766 "trtype": "TCP", 00:18:13.766 "adrfam": "IPv4", 00:18:13.766 "traddr": "10.0.0.2", 00:18:13.766 "trsvcid": "4420" 00:18:13.766 }, 00:18:13.766 "peer_address": { 00:18:13.766 "trtype": "TCP", 00:18:13.766 "adrfam": "IPv4", 00:18:13.766 "traddr": "10.0.0.1", 00:18:13.766 "trsvcid": "32836" 00:18:13.766 }, 00:18:13.766 "auth": { 00:18:13.766 "state": "completed", 00:18:13.766 "digest": "sha256", 00:18:13.766 "dhgroup": "null" 00:18:13.766 } 00:18:13.766 } 00:18:13.766 ]' 00:18:13.766 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.766 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.766 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.766 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:13.766 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.766 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.766 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.766 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.025 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:18:14.615 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.615 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:14.615 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.615 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.615 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.615 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.615 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:14.615 11:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:14.875 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:14.875 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.875 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.875 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:14.875 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:14.875 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.875 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.875 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.875 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.875 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.875 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.875 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:18:14.875 00:18:15.135 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.135 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.135 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.135 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.135 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.135 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.135 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.135 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.135 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.135 { 00:18:15.135 "cntlid": 3, 00:18:15.135 "qid": 0, 00:18:15.135 "state": "enabled", 00:18:15.135 "thread": "nvmf_tgt_poll_group_000", 00:18:15.135 "listen_address": { 00:18:15.135 "trtype": "TCP", 00:18:15.135 "adrfam": "IPv4", 00:18:15.135 "traddr": "10.0.0.2", 00:18:15.135 "trsvcid": "4420" 00:18:15.135 }, 00:18:15.135 "peer_address": { 00:18:15.135 "trtype": "TCP", 00:18:15.135 "adrfam": "IPv4", 00:18:15.135 "traddr": "10.0.0.1", 00:18:15.135 "trsvcid": "32866" 00:18:15.135 }, 00:18:15.135 "auth": { 00:18:15.135 "state": "completed", 00:18:15.135 "digest": "sha256", 00:18:15.135 "dhgroup": "null" 00:18:15.135 } 00:18:15.135 } 00:18:15.135 ]' 00:18:15.135 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.135 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.135 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.395 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:15.395 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.395 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.396 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.396 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.396 11:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:18:15.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.965 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:15.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:15.965 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:16.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:16.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.225 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:16.226 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:16.226 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.226 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.226 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.226 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.226 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.226 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.226 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.486 00:18:16.486 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.486 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.486 11:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.746 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.746 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.746 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.746 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.746 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.746 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.746 { 00:18:16.746 "cntlid": 5, 00:18:16.746 "qid": 0, 00:18:16.746 "state": "enabled", 00:18:16.746 "thread": "nvmf_tgt_poll_group_000", 00:18:16.746 "listen_address": { 00:18:16.746 "trtype": "TCP", 00:18:16.746 "adrfam": "IPv4", 00:18:16.746 "traddr": "10.0.0.2", 00:18:16.746 "trsvcid": "4420" 00:18:16.746 }, 00:18:16.746 "peer_address": { 00:18:16.746 "trtype": "TCP", 00:18:16.746 "adrfam": "IPv4", 00:18:16.746 "traddr": "10.0.0.1", 00:18:16.746 "trsvcid": "32894" 00:18:16.746 }, 00:18:16.746 "auth": { 00:18:16.746 "state": "completed", 00:18:16.746 "digest": "sha256", 00:18:16.746 "dhgroup": "null" 00:18:16.746 } 00:18:16.746 } 00:18:16.746 ]' 00:18:16.746 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.746 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.746 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.746 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:16.746 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.746 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.746 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.746 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.006 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:18:17.576 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.576 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:17.576 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:17.576 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.576 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.576 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.576 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:17.576 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:17.576 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:17.576 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.576 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.576 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:17.576 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:17.576 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.576 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:17.576 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.576 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.576 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.576 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.576 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.836 00:18:17.836 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.836 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.836 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.096 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.096 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.096 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.096 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.096 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.096 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.096 { 00:18:18.096 "cntlid": 7, 00:18:18.096 "qid": 0, 00:18:18.096 "state": "enabled", 00:18:18.096 "thread": "nvmf_tgt_poll_group_000", 00:18:18.096 "listen_address": { 00:18:18.096 "trtype": "TCP", 00:18:18.096 "adrfam": "IPv4", 00:18:18.096 "traddr": "10.0.0.2", 00:18:18.096 "trsvcid": "4420" 00:18:18.096 }, 00:18:18.096 "peer_address": { 00:18:18.096 "trtype": "TCP", 00:18:18.096 "adrfam": "IPv4", 00:18:18.096 "traddr": "10.0.0.1", 00:18:18.096 "trsvcid": "60032" 00:18:18.096 }, 00:18:18.096 "auth": { 00:18:18.096 "state": "completed", 00:18:18.096 "digest": "sha256", 00:18:18.096 "dhgroup": "null" 00:18:18.096 } 00:18:18.096 } 00:18:18.096 ]' 00:18:18.096 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.096 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.096 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.096 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:18.096 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.356 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.356 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.356 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.356 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:18:18.926 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.926 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.926 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.926 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.926 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.926 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.926 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.926 11:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:18.926 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:19.186 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:19.186 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.186 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.186 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:19.186 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:19.186 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.186 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.186 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.186 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.186 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.186 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.186 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.446 00:18:19.446 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.446 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.446 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.446 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.446 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.446 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.706 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.706 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.706 11:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.706 { 00:18:19.706 "cntlid": 9, 00:18:19.706 "qid": 0, 00:18:19.706 "state": "enabled", 00:18:19.706 "thread": "nvmf_tgt_poll_group_000", 00:18:19.706 "listen_address": { 00:18:19.706 "trtype": "TCP", 00:18:19.706 "adrfam": "IPv4", 00:18:19.706 "traddr": "10.0.0.2", 00:18:19.706 "trsvcid": "4420" 00:18:19.706 }, 00:18:19.706 "peer_address": { 00:18:19.706 "trtype": "TCP", 00:18:19.706 "adrfam": "IPv4", 00:18:19.706 "traddr": "10.0.0.1", 00:18:19.706 "trsvcid": "60054" 00:18:19.706 }, 00:18:19.706 "auth": { 00:18:19.706 "state": "completed", 00:18:19.707 "digest": "sha256", 00:18:19.707 "dhgroup": "ffdhe2048" 00:18:19.707 } 00:18:19.707 } 00:18:19.707 ]' 00:18:19.707 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.707 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.707 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.707 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:19.707 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.707 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.707 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.707 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.966 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:18:20.536 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.536 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:20.536 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.536 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.536 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.536 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.536 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:20.536 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:20.537 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:20.537 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.537 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:20.537 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:20.537 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:20.537 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.537 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.537 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.537 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.537 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.537 11:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.797 00:18:20.797 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.797 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.797 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.058 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.058 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.058 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.058 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.058 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.058 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.058 { 00:18:21.058 "cntlid": 11, 00:18:21.058 "qid": 0, 00:18:21.058 "state": "enabled", 00:18:21.058 "thread": "nvmf_tgt_poll_group_000", 00:18:21.058 "listen_address": { 
00:18:21.058 "trtype": "TCP", 00:18:21.058 "adrfam": "IPv4", 00:18:21.058 "traddr": "10.0.0.2", 00:18:21.058 "trsvcid": "4420" 00:18:21.058 }, 00:18:21.058 "peer_address": { 00:18:21.058 "trtype": "TCP", 00:18:21.058 "adrfam": "IPv4", 00:18:21.058 "traddr": "10.0.0.1", 00:18:21.058 "trsvcid": "60066" 00:18:21.058 }, 00:18:21.058 "auth": { 00:18:21.058 "state": "completed", 00:18:21.058 "digest": "sha256", 00:18:21.058 "dhgroup": "ffdhe2048" 00:18:21.058 } 00:18:21.058 } 00:18:21.058 ]' 00:18:21.058 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.058 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.058 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.058 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:21.058 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.318 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.318 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.318 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.318 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:18:21.888 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.888 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:21.888 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.888 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.888 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.888 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.888 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:21.888 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:22.148 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:22.148 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.148 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:22.148 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:22.148 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:22.148 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.148 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.148 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.148 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.148 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.148 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.148 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.409 00:18:22.409 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.409 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.409 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.409 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.409 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.409 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.409 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.669 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.669 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.669 { 00:18:22.669 "cntlid": 13, 00:18:22.669 "qid": 0, 00:18:22.669 "state": "enabled", 00:18:22.669 "thread": "nvmf_tgt_poll_group_000", 00:18:22.669 "listen_address": { 00:18:22.669 "trtype": "TCP", 00:18:22.669 "adrfam": "IPv4", 00:18:22.669 "traddr": "10.0.0.2", 00:18:22.669 "trsvcid": "4420" 00:18:22.669 }, 00:18:22.669 "peer_address": { 00:18:22.669 "trtype": "TCP", 00:18:22.669 "adrfam": "IPv4", 00:18:22.669 "traddr": "10.0.0.1", 00:18:22.669 "trsvcid": "60104" 00:18:22.669 }, 00:18:22.669 "auth": { 00:18:22.669 
"state": "completed", 00:18:22.669 "digest": "sha256", 00:18:22.669 "dhgroup": "ffdhe2048" 00:18:22.669 } 00:18:22.669 } 00:18:22.669 ]' 00:18:22.669 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.669 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.669 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.669 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.669 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.669 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.669 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.669 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.929 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.499 11:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.759 00:18:23.759 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.759 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.759 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.020 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.020 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.020 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.020 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.020 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.020 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.020 { 00:18:24.020 "cntlid": 15, 00:18:24.020 "qid": 0, 00:18:24.020 "state": "enabled", 00:18:24.020 "thread": "nvmf_tgt_poll_group_000", 00:18:24.020 "listen_address": { 00:18:24.020 "trtype": "TCP", 00:18:24.020 "adrfam": "IPv4", 00:18:24.020 "traddr": "10.0.0.2", 00:18:24.020 "trsvcid": "4420" 00:18:24.020 }, 00:18:24.020 "peer_address": { 00:18:24.020 "trtype": "TCP", 00:18:24.020 "adrfam": "IPv4", 00:18:24.020 "traddr": "10.0.0.1", 00:18:24.020 "trsvcid": "60124" 00:18:24.020 }, 00:18:24.020 "auth": { 00:18:24.020 "state": "completed", 00:18:24.020 "digest": "sha256", 00:18:24.020 "dhgroup": "ffdhe2048" 00:18:24.020 } 00:18:24.020 } 00:18:24.020 ]' 00:18:24.020 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.020 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.020 11:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.020 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:24.020 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.279 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.279 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.279 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.279 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:18:24.848 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.848 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:24.848 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.848 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.848 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.848 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.848 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.848 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:24.848 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:25.110 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:25.110 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.110 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:25.110 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:25.110 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:25.110 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.111 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.111 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.111 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.111 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.111 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.111 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.419 00:18:25.419 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.419 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.419 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.419 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.419 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.419 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.419 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.419 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.419 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.419 { 00:18:25.419 "cntlid": 17, 00:18:25.419 "qid": 0, 00:18:25.419 "state": "enabled", 00:18:25.419 "thread": "nvmf_tgt_poll_group_000", 00:18:25.419 "listen_address": { 00:18:25.419 "trtype": "TCP", 00:18:25.419 "adrfam": "IPv4", 00:18:25.419 "traddr": "10.0.0.2", 00:18:25.419 "trsvcid": "4420" 00:18:25.419 }, 00:18:25.419 "peer_address": { 00:18:25.419 "trtype": "TCP", 00:18:25.419 "adrfam": "IPv4", 00:18:25.419 "traddr": "10.0.0.1", 00:18:25.419 "trsvcid": "60152" 00:18:25.419 }, 00:18:25.419 "auth": { 00:18:25.419 "state": "completed", 00:18:25.419 "digest": "sha256", 00:18:25.419 "dhgroup": "ffdhe3072" 00:18:25.419 } 00:18:25.419 } 00:18:25.419 ]' 00:18:25.419 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.419 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.419 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.679 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:25.679 11:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.679 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.679 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.679 11:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.938 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:18:26.198 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.458 11:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.458 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.718 00:18:26.718 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.718 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.718 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.978 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.978 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.978 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.978 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.978 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.978 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.978 { 00:18:26.978 "cntlid": 19, 00:18:26.978 "qid": 0, 00:18:26.978 "state": "enabled", 00:18:26.978 "thread": "nvmf_tgt_poll_group_000", 00:18:26.978 "listen_address": { 00:18:26.978 "trtype": "TCP", 00:18:26.978 "adrfam": "IPv4", 00:18:26.978 "traddr": "10.0.0.2", 00:18:26.978 "trsvcid": "4420" 00:18:26.978 }, 00:18:26.978 "peer_address": { 00:18:26.978 "trtype": "TCP", 00:18:26.978 "adrfam": "IPv4", 00:18:26.978 "traddr": "10.0.0.1", 00:18:26.978 "trsvcid": "60184" 00:18:26.978 }, 00:18:26.978 "auth": { 00:18:26.978 "state": "completed", 00:18:26.978 "digest": "sha256", 00:18:26.978 "dhgroup": "ffdhe3072" 00:18:26.978 } 00:18:26.978 } 00:18:26.978 ]' 00:18:26.978 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.978 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:26.978 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.978 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:26.978 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.978 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.978 11:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.978 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.238 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:18:27.808 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.808 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:27.808 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.808 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.808 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.808 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.808 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:27.808 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:28.067 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:28.067 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.067 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.067 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:28.067 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:28.067 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.067 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.067 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.067 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.067 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.067 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.067 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.327 00:18:28.327 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.328 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.328 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.328 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.328 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.328 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.328 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.328 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.328 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.328 { 00:18:28.328 "cntlid": 21, 00:18:28.328 "qid": 0, 00:18:28.328 "state": "enabled", 00:18:28.328 "thread": "nvmf_tgt_poll_group_000", 00:18:28.328 "listen_address": { 00:18:28.328 "trtype": "TCP", 00:18:28.328 "adrfam": "IPv4", 00:18:28.328 "traddr": "10.0.0.2", 00:18:28.328 "trsvcid": "4420" 00:18:28.328 }, 00:18:28.328 "peer_address": { 00:18:28.328 "trtype": "TCP", 00:18:28.328 "adrfam": "IPv4", 00:18:28.328 "traddr": "10.0.0.1", 00:18:28.328 "trsvcid": "54262" 00:18:28.328 }, 00:18:28.328 "auth": { 00:18:28.328 "state": "completed", 00:18:28.328 "digest": "sha256", 00:18:28.328 "dhgroup": "ffdhe3072" 00:18:28.328 } 00:18:28.328 } 00:18:28.328 ]' 00:18:28.328 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.328 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.328 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.588 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.588 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.588 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.588 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.588 11:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.588 
11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:18:29.157 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.157 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:29.157 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.157 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.157 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.157 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.157 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:29.157 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:29.418 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:29.418 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.418 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:29.418 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:29.418 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:29.418 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.418 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:29.418 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.418 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.418 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.418 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.418 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.678 00:18:29.678 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.678 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.678 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.939 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.939 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.939 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.939 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.939 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.939 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.939 { 00:18:29.939 "cntlid": 23, 00:18:29.939 "qid": 0, 00:18:29.939 "state": "enabled", 00:18:29.939 "thread": "nvmf_tgt_poll_group_000", 00:18:29.939 "listen_address": { 00:18:29.939 "trtype": "TCP", 00:18:29.939 "adrfam": "IPv4", 00:18:29.939 "traddr": "10.0.0.2", 00:18:29.939 "trsvcid": "4420" 00:18:29.939 }, 00:18:29.939 "peer_address": { 00:18:29.939 "trtype": "TCP", 00:18:29.939 "adrfam": "IPv4", 00:18:29.939 "traddr": "10.0.0.1", 00:18:29.939 "trsvcid": "54278" 00:18:29.939 }, 00:18:29.939 "auth": { 00:18:29.939 "state": "completed", 00:18:29.939 "digest": "sha256", 00:18:29.939 "dhgroup": "ffdhe3072" 00:18:29.939 } 00:18:29.939 } 00:18:29.939 ]' 00:18:29.939 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.939 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.939 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.939 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:29.939 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.939 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.939 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.939 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.199 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:18:30.772 11:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.772 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:30.772 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.772 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.772 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.772 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.772 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.772 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:30.772 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:31.033 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:18:31.033 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.033 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:31.033 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:31.033 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:31.033 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.033 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.033 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.033 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.033 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.033 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.033 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.294 00:18:31.294 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.294 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.294 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.294 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.294 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.294 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.294 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.294 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.294 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.294 { 00:18:31.294 "cntlid": 25, 00:18:31.294 "qid": 0, 00:18:31.294 "state": "enabled", 00:18:31.294 "thread": "nvmf_tgt_poll_group_000", 00:18:31.294 "listen_address": { 00:18:31.294 "trtype": "TCP", 00:18:31.294 "adrfam": "IPv4", 00:18:31.294 "traddr": "10.0.0.2", 00:18:31.294 "trsvcid": "4420" 00:18:31.294 }, 00:18:31.294 "peer_address": { 00:18:31.294 "trtype": "TCP", 00:18:31.294 "adrfam": "IPv4", 00:18:31.294 "traddr": "10.0.0.1", 00:18:31.294 "trsvcid": "54316" 00:18:31.294 }, 00:18:31.294 "auth": { 00:18:31.294 "state": "completed", 00:18:31.294 "digest": "sha256", 00:18:31.294 "dhgroup": "ffdhe4096" 00:18:31.294 } 00:18:31.294 } 00:18:31.294 ]' 00:18:31.294 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.554 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.554 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.554 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:31.554 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.554 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.554 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.554 11:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.814 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.385 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.644 00:18:32.644 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.644 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.644 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.904 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.904 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.904 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.904 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.904 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.904 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.904 { 00:18:32.904 "cntlid": 27, 00:18:32.904 "qid": 0, 00:18:32.904 "state": "enabled", 00:18:32.904 "thread": "nvmf_tgt_poll_group_000", 00:18:32.904 "listen_address": { 00:18:32.904 "trtype": "TCP", 00:18:32.904 "adrfam": "IPv4", 00:18:32.904 "traddr": "10.0.0.2", 00:18:32.904 "trsvcid": "4420" 00:18:32.904 }, 00:18:32.904 "peer_address": { 00:18:32.904 "trtype": "TCP", 00:18:32.904 "adrfam": "IPv4", 00:18:32.904 "traddr": "10.0.0.1", 00:18:32.904 "trsvcid": "54344" 00:18:32.904 }, 00:18:32.904 "auth": { 00:18:32.904 "state": "completed", 00:18:32.904 "digest": "sha256", 00:18:32.904 "dhgroup": "ffdhe4096" 00:18:32.904 } 00:18:32.904 } 00:18:32.904 ]' 00:18:32.904 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.904 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.904 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.904 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:32.904 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.904 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.904 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.904 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.164 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:18:33.735 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.735 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:33.735 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.735 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.735 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.735 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.735 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:33.735 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:33.995 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:33.995 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.995 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:33.995 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:33.995 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:33.995 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.996 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.996 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.996 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.996 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.996 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.996 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.255 00:18:34.256 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.256 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.256 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.256 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.256 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.256 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.256 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.256 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.256 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.256 { 00:18:34.256 "cntlid": 29, 00:18:34.256 "qid": 0, 00:18:34.256 "state": "enabled", 00:18:34.256 "thread": "nvmf_tgt_poll_group_000", 00:18:34.256 "listen_address": { 00:18:34.256 "trtype": "TCP", 00:18:34.256 "adrfam": "IPv4", 00:18:34.256 "traddr": "10.0.0.2", 00:18:34.256 "trsvcid": "4420" 00:18:34.256 }, 00:18:34.256 "peer_address": { 00:18:34.256 "trtype": "TCP", 00:18:34.256 "adrfam": "IPv4", 00:18:34.256 "traddr": "10.0.0.1", 00:18:34.256 "trsvcid": "54384" 00:18:34.256 }, 00:18:34.256 "auth": { 00:18:34.256 "state": "completed", 00:18:34.256 "digest": "sha256", 00:18:34.256 "dhgroup": "ffdhe4096" 00:18:34.256 } 00:18:34.256 } 00:18:34.256 ]' 00:18:34.256 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.256 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.256 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.516 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.516 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.516 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.516 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.516 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.516 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:18:35.085 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.085 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:35.085 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.085 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.085 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.085 11:06:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.085 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:35.085 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:35.346 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:35.346 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.346 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:35.346 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:35.346 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:35.346 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.346 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:35.346 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.346 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.346 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.346 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.346 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.606 00:18:35.606 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.606 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.606 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.866 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.866 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.866 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.866 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.866 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:35.866 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.866 { 00:18:35.866 "cntlid": 31, 00:18:35.866 "qid": 0, 00:18:35.866 "state": "enabled", 00:18:35.866 "thread": "nvmf_tgt_poll_group_000", 00:18:35.866 "listen_address": { 00:18:35.866 "trtype": "TCP", 00:18:35.866 "adrfam": "IPv4", 00:18:35.866 "traddr": "10.0.0.2", 00:18:35.866 "trsvcid": "4420" 00:18:35.866 }, 00:18:35.866 "peer_address": { 00:18:35.866 "trtype": "TCP", 00:18:35.866 "adrfam": "IPv4", 00:18:35.866 "traddr": "10.0.0.1", 00:18:35.866 "trsvcid": "54414" 00:18:35.866 }, 00:18:35.866 "auth": { 00:18:35.866 "state": "completed", 00:18:35.866 "digest": "sha256", 00:18:35.866 "dhgroup": "ffdhe4096" 00:18:35.866 } 00:18:35.866 } 00:18:35.866 ]' 00:18:35.866 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.866 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.866 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.866 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:35.866 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.866 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.866 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.866 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.126 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:18:36.697 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.697 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:36.697 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.697 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.697 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.697 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.697 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.697 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:36.697 11:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:36.697 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:36.697 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.697 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:36.697 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:36.697 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:36.697 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.697 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.697 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.697 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.697 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.697 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.697 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.267 00:18:37.267 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.267 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.267 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.267 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.267 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.267 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.267 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.267 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.267 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.267 { 00:18:37.267 "cntlid": 33, 00:18:37.267 "qid": 0, 00:18:37.267 "state": "enabled", 00:18:37.267 "thread": "nvmf_tgt_poll_group_000", 00:18:37.267 "listen_address": { 
00:18:37.267 "trtype": "TCP", 00:18:37.267 "adrfam": "IPv4", 00:18:37.267 "traddr": "10.0.0.2", 00:18:37.267 "trsvcid": "4420" 00:18:37.267 }, 00:18:37.267 "peer_address": { 00:18:37.267 "trtype": "TCP", 00:18:37.267 "adrfam": "IPv4", 00:18:37.267 "traddr": "10.0.0.1", 00:18:37.267 "trsvcid": "54446" 00:18:37.267 }, 00:18:37.267 "auth": { 00:18:37.267 "state": "completed", 00:18:37.267 "digest": "sha256", 00:18:37.267 "dhgroup": "ffdhe6144" 00:18:37.267 } 00:18:37.267 } 00:18:37.267 ]' 00:18:37.267 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.267 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.267 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.527 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:37.527 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.527 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.527 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.527 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.527 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:18:38.096 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.096 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.096 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.096 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.096 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.096 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.096 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:38.096 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:38.356 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:38.356 11:06:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.356 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.356 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:38.356 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:38.356 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.356 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.356 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.356 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.356 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.356 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.356 11:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.616 00:18:38.616 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:38.616 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.616 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.875 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.875 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.875 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.875 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.875 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.875 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.875 { 00:18:38.875 "cntlid": 35, 00:18:38.875 "qid": 0, 00:18:38.875 "state": "enabled", 00:18:38.875 "thread": "nvmf_tgt_poll_group_000", 00:18:38.875 "listen_address": { 00:18:38.875 "trtype": "TCP", 00:18:38.875 "adrfam": "IPv4", 00:18:38.875 "traddr": "10.0.0.2", 00:18:38.875 "trsvcid": "4420" 00:18:38.875 }, 00:18:38.875 "peer_address": { 00:18:38.875 "trtype": "TCP", 00:18:38.875 "adrfam": "IPv4", 00:18:38.875 "traddr": "10.0.0.1", 00:18:38.875 "trsvcid": "44136" 00:18:38.875 
}, 00:18:38.875 "auth": { 00:18:38.875 "state": "completed", 00:18:38.875 "digest": "sha256", 00:18:38.875 "dhgroup": "ffdhe6144" 00:18:38.875 } 00:18:38.875 } 00:18:38.875 ]' 00:18:38.875 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.875 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.875 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.875 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:38.875 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.875 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.875 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.875 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.136 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:18:39.738 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:39.739 11:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.739 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.309 00:18:40.309 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.309 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.309 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.309 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.309 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.309 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.309 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.309 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.309 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.309 { 00:18:40.309 "cntlid": 37, 00:18:40.309 "qid": 0, 00:18:40.309 "state": "enabled", 00:18:40.309 "thread": "nvmf_tgt_poll_group_000", 00:18:40.309 "listen_address": { 00:18:40.309 "trtype": "TCP", 00:18:40.309 "adrfam": "IPv4", 00:18:40.309 "traddr": "10.0.0.2", 00:18:40.309 "trsvcid": "4420" 00:18:40.309 }, 00:18:40.309 "peer_address": { 00:18:40.309 "trtype": "TCP", 00:18:40.309 "adrfam": "IPv4", 00:18:40.309 "traddr": "10.0.0.1", 00:18:40.309 "trsvcid": "44170" 00:18:40.309 }, 00:18:40.309 "auth": { 00:18:40.309 "state": "completed", 00:18:40.309 "digest": "sha256", 00:18:40.309 "dhgroup": "ffdhe6144" 00:18:40.309 } 00:18:40.309 } 00:18:40.309 ]' 00:18:40.309 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.309 11:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.309 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.569 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:40.569 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.569 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.569 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.569 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.569 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:18:41.138 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.138 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:41.138 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.138 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.138 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.138 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.139 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:41.139 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:41.397 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:41.397 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.397 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:41.397 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:41.397 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:41.397 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.397 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:41.397 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.397 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.397 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.397 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.397 11:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.656 00:18:41.656 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.656 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.656 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.916 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.916 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.916 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.916 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.916 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.916 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.916 { 00:18:41.916 "cntlid": 39, 00:18:41.916 "qid": 0, 00:18:41.916 "state": "enabled", 00:18:41.916 "thread": "nvmf_tgt_poll_group_000", 00:18:41.916 "listen_address": { 00:18:41.916 "trtype": "TCP", 00:18:41.916 "adrfam": "IPv4", 00:18:41.916 "traddr": "10.0.0.2", 00:18:41.916 "trsvcid": "4420" 00:18:41.916 }, 00:18:41.916 "peer_address": { 00:18:41.916 "trtype": "TCP", 00:18:41.916 "adrfam": "IPv4", 00:18:41.916 "traddr": "10.0.0.1", 00:18:41.916 "trsvcid": "44182" 00:18:41.916 }, 00:18:41.916 "auth": { 00:18:41.916 "state": "completed", 00:18:41.916 "digest": "sha256", 00:18:41.916 "dhgroup": "ffdhe6144" 00:18:41.916 } 00:18:41.916 } 00:18:41.916 ]' 00:18:41.916 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.916 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.916 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.916 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:41.916 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.177 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.177 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.177 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.177 11:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:18:42.746 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.746 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:42.746 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.746 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.746 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.746 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.746 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.746 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:42.746 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:43.007 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:43.007 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.007 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:43.007 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:43.007 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:43.007 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.007 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.007 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.007 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:43.007 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.007 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.007 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.577 00:18:43.577 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.577 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.577 11:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.577 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.577 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.577 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.577 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.577 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.577 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.577 { 00:18:43.577 "cntlid": 41, 00:18:43.577 "qid": 0, 00:18:43.577 "state": "enabled", 00:18:43.577 "thread": "nvmf_tgt_poll_group_000", 00:18:43.577 "listen_address": { 00:18:43.577 "trtype": "TCP", 00:18:43.577 "adrfam": "IPv4", 00:18:43.577 "traddr": "10.0.0.2", 00:18:43.577 "trsvcid": "4420" 00:18:43.577 }, 00:18:43.577 "peer_address": { 00:18:43.577 "trtype": "TCP", 00:18:43.577 "adrfam": "IPv4", 00:18:43.577 "traddr": "10.0.0.1", 00:18:43.577 "trsvcid": "44214" 00:18:43.577 }, 00:18:43.577 "auth": { 00:18:43.577 "state": "completed", 00:18:43.577 "digest": "sha256", 00:18:43.577 "dhgroup": "ffdhe8192" 00:18:43.577 } 00:18:43.577 } 00:18:43.577 ]' 00:18:43.577 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.577 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.577 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.837 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.837 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.837 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.837 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:43.837 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.837 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:18:44.404 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.404 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:44.404 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.404 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.404 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.404 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.404 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:44.404 11:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:44.662 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:44.663 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.663 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.663 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:44.663 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.663 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.663 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.663 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.663 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.663 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.663 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.663 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.229 00:18:45.229 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.229 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.229 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.229 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.229 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.489 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.489 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.489 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.489 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.489 { 00:18:45.489 "cntlid": 43, 00:18:45.489 "qid": 0, 00:18:45.489 "state": "enabled", 00:18:45.489 "thread": "nvmf_tgt_poll_group_000", 00:18:45.489 "listen_address": { 00:18:45.489 "trtype": "TCP", 00:18:45.489 "adrfam": "IPv4", 00:18:45.489 "traddr": "10.0.0.2", 00:18:45.489 "trsvcid": "4420" 00:18:45.489 }, 00:18:45.489 "peer_address": { 00:18:45.489 "trtype": "TCP", 00:18:45.489 "adrfam": "IPv4", 00:18:45.489 "traddr": "10.0.0.1", 00:18:45.489 "trsvcid": "44242" 00:18:45.489 }, 00:18:45.489 "auth": { 00:18:45.489 "state": "completed", 00:18:45.489 "digest": "sha256", 00:18:45.489 "dhgroup": "ffdhe8192" 00:18:45.489 } 00:18:45.489 } 00:18:45.489 ]' 00:18:45.489 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.489 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.489 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.489 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:45.489 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.489 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.489 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.489 11:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.749 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:18:46.317 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.318 11:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.885 00:18:46.885 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.885 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.885 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.144 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.144 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.144 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.144 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.144 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.144 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.144 { 00:18:47.144 "cntlid": 45, 00:18:47.144 "qid": 0, 00:18:47.144 "state": "enabled", 00:18:47.144 "thread": "nvmf_tgt_poll_group_000", 00:18:47.144 "listen_address": { 00:18:47.144 "trtype": "TCP", 00:18:47.144 "adrfam": "IPv4", 00:18:47.144 "traddr": "10.0.0.2", 00:18:47.144 "trsvcid": "4420" 00:18:47.144 }, 00:18:47.144 "peer_address": { 00:18:47.144 "trtype": "TCP", 00:18:47.144 "adrfam": "IPv4", 00:18:47.144 "traddr": "10.0.0.1", 00:18:47.144 "trsvcid": "44250" 00:18:47.144 }, 00:18:47.144 "auth": { 00:18:47.144 "state": "completed", 00:18:47.144 "digest": "sha256", 00:18:47.144 "dhgroup": "ffdhe8192" 00:18:47.144 } 00:18:47.144 } 00:18:47.144 ]' 00:18:47.144 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.144 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.144 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.144 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:47.144 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.144 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.144 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.144 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.403 11:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret 
DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.971 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.540 00:18:48.540 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.540 11:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.540 11:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.798 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.798 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.798 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.798 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.798 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.798 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.798 { 00:18:48.798 "cntlid": 47, 00:18:48.798 "qid": 0, 00:18:48.798 "state": "enabled", 00:18:48.798 "thread": "nvmf_tgt_poll_group_000", 00:18:48.798 "listen_address": { 00:18:48.798 "trtype": "TCP", 00:18:48.798 "adrfam": "IPv4", 00:18:48.798 "traddr": "10.0.0.2", 00:18:48.798 "trsvcid": "4420" 00:18:48.798 }, 00:18:48.798 "peer_address": { 00:18:48.798 "trtype": "TCP", 00:18:48.798 "adrfam": "IPv4", 00:18:48.798 "traddr": "10.0.0.1", 00:18:48.798 "trsvcid": "52286" 00:18:48.798 }, 00:18:48.798 "auth": { 00:18:48.798 "state": "completed", 00:18:48.798 "digest": "sha256", 00:18:48.798 "dhgroup": "ffdhe8192" 00:18:48.798 } 00:18:48.798 } 00:18:48.798 ]' 00:18:48.798 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.798 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.798 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.798 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:48.798 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.799 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.799 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.799 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.058 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:18:49.628 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.628 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:49.628 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.628 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.628 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.628 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:49.628 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.628 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.628 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:49.628 11:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:49.888 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:49.888 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.888 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.888 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:49.888 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:49.888 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.888 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.888 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.888 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.888 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.888 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.888 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.888 00:18:50.147 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.147 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.147 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.147 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.147 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.147 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.147 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.147 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.147 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.147 { 00:18:50.147 "cntlid": 49, 00:18:50.147 "qid": 0, 00:18:50.147 "state": "enabled", 00:18:50.147 "thread": "nvmf_tgt_poll_group_000", 00:18:50.147 "listen_address": { 00:18:50.147 "trtype": "TCP", 00:18:50.147 "adrfam": "IPv4", 00:18:50.147 "traddr": "10.0.0.2", 00:18:50.147 "trsvcid": "4420" 00:18:50.147 }, 00:18:50.147 "peer_address": { 00:18:50.147 "trtype": "TCP", 00:18:50.147 "adrfam": "IPv4", 00:18:50.147 "traddr": "10.0.0.1", 00:18:50.147 "trsvcid": "52316" 00:18:50.147 }, 00:18:50.147 "auth": { 00:18:50.147 "state": "completed", 00:18:50.147 "digest": "sha384", 00:18:50.147 "dhgroup": "null" 00:18:50.147 } 00:18:50.147 } 00:18:50.147 ]' 00:18:50.147 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.147 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.147 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.406 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:50.406 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.406 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.406 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.406 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.406 11:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:18:50.975 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.976 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:50.976 11:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.976 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.976 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.976 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.976 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:50.976 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:51.235 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:51.235 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.235 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.235 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:51.235 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:51.235 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.235 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.235 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.235 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.235 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.236 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.236 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.495 00:18:51.495 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.495 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.495 11:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.755 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.755 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.755 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.755 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.755 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.755 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.755 { 00:18:51.755 "cntlid": 51, 00:18:51.755 "qid": 0, 00:18:51.755 "state": "enabled", 00:18:51.755 "thread": "nvmf_tgt_poll_group_000", 00:18:51.755 "listen_address": { 00:18:51.755 "trtype": "TCP", 00:18:51.755 "adrfam": "IPv4", 00:18:51.755 "traddr": "10.0.0.2", 00:18:51.755 "trsvcid": "4420" 00:18:51.755 }, 00:18:51.755 "peer_address": { 00:18:51.755 "trtype": "TCP", 00:18:51.755 "adrfam": "IPv4", 00:18:51.755 "traddr": "10.0.0.1", 00:18:51.755 "trsvcid": "52348" 00:18:51.755 }, 00:18:51.755 "auth": { 00:18:51.755 "state": "completed", 00:18:51.755 "digest": "sha384", 00:18:51.755 "dhgroup": "null" 00:18:51.755 } 00:18:51.755 } 00:18:51.755 ]' 00:18:51.755 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.755 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.755 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.755 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:51.755 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.755 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.755 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.755 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.015 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:18:52.583 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.583 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:52.583 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.583 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.583 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.583 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.584 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:52.584 11:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:52.584 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:52.584 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.584 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.584 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:52.584 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.584 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.584 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.584 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.584 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.584 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.584 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.584 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.843 00:18:52.843 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.843 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.843 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.103 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.103 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.103 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.103 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.103 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:53.103 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.103 { 00:18:53.103 "cntlid": 53, 00:18:53.103 "qid": 0, 00:18:53.103 "state": "enabled", 00:18:53.103 "thread": "nvmf_tgt_poll_group_000", 00:18:53.104 "listen_address": { 00:18:53.104 "trtype": "TCP", 00:18:53.104 "adrfam": "IPv4", 00:18:53.104 "traddr": "10.0.0.2", 00:18:53.104 "trsvcid": "4420" 00:18:53.104 }, 00:18:53.104 "peer_address": { 00:18:53.104 "trtype": "TCP", 00:18:53.104 "adrfam": "IPv4", 00:18:53.104 "traddr": "10.0.0.1", 00:18:53.104 "trsvcid": "52370" 00:18:53.104 }, 00:18:53.104 "auth": { 00:18:53.104 "state": "completed", 00:18:53.104 "digest": "sha384", 00:18:53.104 "dhgroup": "null" 00:18:53.104 } 00:18:53.104 } 00:18:53.104 ]' 00:18:53.104 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.104 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.104 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.104 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:53.104 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.363 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.363 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.363 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.363 11:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:18:54.003 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.003 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:54.003 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.003 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.003 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.003 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.003 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:54.003 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.263 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.263 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.523 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.523 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.523 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.523 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.523 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.523 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.523 { 00:18:54.523 "cntlid": 55, 00:18:54.523 "qid": 0, 00:18:54.523 "state": "enabled", 00:18:54.523 "thread": "nvmf_tgt_poll_group_000", 00:18:54.523 "listen_address": { 00:18:54.523 "trtype": "TCP", 00:18:54.523 "adrfam": "IPv4", 00:18:54.523 "traddr": "10.0.0.2", 00:18:54.523 "trsvcid": "4420" 00:18:54.523 }, 00:18:54.523 "peer_address": { 
00:18:54.523 "trtype": "TCP", 00:18:54.523 "adrfam": "IPv4", 00:18:54.523 "traddr": "10.0.0.1", 00:18:54.523 "trsvcid": "52408" 00:18:54.523 }, 00:18:54.523 "auth": { 00:18:54.523 "state": "completed", 00:18:54.523 "digest": "sha384", 00:18:54.523 "dhgroup": "null" 00:18:54.523 } 00:18:54.523 } 00:18:54.523 ]' 00:18:54.523 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.523 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.523 11:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.782 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:54.782 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.782 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.782 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.782 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.782 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:18:55.351 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.351 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:55.351 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.351 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.351 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.351 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.351 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.351 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:55.351 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:55.612 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:55.612 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.612 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:18:55.612 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:55.612 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:55.612 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.612 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.612 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.612 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.612 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.612 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.612 11:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.873 00:18:55.873 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.873 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.873 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.134 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.134 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.134 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.134 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.134 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.134 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.134 { 00:18:56.134 "cntlid": 57, 00:18:56.134 "qid": 0, 00:18:56.134 "state": "enabled", 00:18:56.134 "thread": "nvmf_tgt_poll_group_000", 00:18:56.134 "listen_address": { 00:18:56.134 "trtype": "TCP", 00:18:56.134 "adrfam": "IPv4", 00:18:56.134 "traddr": "10.0.0.2", 00:18:56.134 "trsvcid": "4420" 00:18:56.134 }, 00:18:56.134 "peer_address": { 00:18:56.134 "trtype": "TCP", 00:18:56.134 "adrfam": "IPv4", 00:18:56.134 "traddr": "10.0.0.1", 00:18:56.134 "trsvcid": "52432" 00:18:56.134 }, 00:18:56.134 "auth": { 00:18:56.134 "state": "completed", 00:18:56.134 "digest": "sha384", 00:18:56.134 "dhgroup": "ffdhe2048" 00:18:56.134 } 00:18:56.134 } 00:18:56.134 ]' 
00:18:56.134 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.134 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.134 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.134 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:56.134 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.134 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.134 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.134 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.394 11:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.964 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.224 00:18:57.224 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.224 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.224 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.484 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.484 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.484 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.484 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.484 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.484 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.484 { 00:18:57.484 "cntlid": 59, 00:18:57.484 "qid": 0, 00:18:57.484 "state": "enabled", 00:18:57.484 "thread": "nvmf_tgt_poll_group_000", 00:18:57.484 "listen_address": { 00:18:57.484 "trtype": "TCP", 00:18:57.484 "adrfam": "IPv4", 00:18:57.484 "traddr": "10.0.0.2", 00:18:57.484 "trsvcid": "4420" 00:18:57.484 }, 00:18:57.484 "peer_address": { 00:18:57.484 "trtype": "TCP", 00:18:57.484 "adrfam": "IPv4", 00:18:57.484 "traddr": "10.0.0.1", 00:18:57.484 "trsvcid": "37676" 00:18:57.484 }, 00:18:57.484 "auth": { 00:18:57.484 "state": "completed", 00:18:57.484 "digest": "sha384", 00:18:57.484 "dhgroup": "ffdhe2048" 00:18:57.484 } 00:18:57.484 } 00:18:57.484 ]' 00:18:57.484 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.484 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.484 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.484 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.484 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.745 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.745 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.745 11:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.746 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:18:58.316 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.316 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:58.316 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.316 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.316 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.316 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.316 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:58.316 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:58.576 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:58.576 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.576 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.576 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:58.576 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:58.576 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.576 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.576 
11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.576 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.576 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.576 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.576 11:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.836 00:18:58.836 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.836 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.836 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.836 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.836 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.836 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.836 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.097 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.097 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.097 { 00:18:59.097 "cntlid": 61, 00:18:59.097 "qid": 0, 00:18:59.097 "state": "enabled", 00:18:59.097 "thread": "nvmf_tgt_poll_group_000", 00:18:59.097 "listen_address": { 00:18:59.097 "trtype": "TCP", 00:18:59.097 "adrfam": "IPv4", 00:18:59.097 "traddr": "10.0.0.2", 00:18:59.097 "trsvcid": "4420" 00:18:59.097 }, 00:18:59.097 "peer_address": { 00:18:59.097 "trtype": "TCP", 00:18:59.097 "adrfam": "IPv4", 00:18:59.097 "traddr": "10.0.0.1", 00:18:59.097 "trsvcid": "37694" 00:18:59.097 }, 00:18:59.097 "auth": { 00:18:59.097 "state": "completed", 00:18:59.097 "digest": "sha384", 00:18:59.097 "dhgroup": "ffdhe2048" 00:18:59.097 } 00:18:59.097 } 00:18:59.097 ]' 00:18:59.097 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.097 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.097 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.097 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:59.097 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.097 11:07:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.097 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.097 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.358 11:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:18:59.927 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.927 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:59.927 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.927 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.927 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.927 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.927 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:59.927 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:59.928 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:59.928 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.928 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.928 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:59.928 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.928 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.928 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:59.928 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.928 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.928 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.928 
11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.928 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.187 00:19:00.187 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.187 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.187 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.447 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.447 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.447 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.447 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.447 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.447 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.447 { 00:19:00.447 "cntlid": 63, 00:19:00.447 "qid": 0, 00:19:00.447 "state": "enabled", 00:19:00.447 "thread": "nvmf_tgt_poll_group_000", 00:19:00.447 "listen_address": { 00:19:00.447 "trtype": "TCP", 00:19:00.447 "adrfam": "IPv4", 00:19:00.447 "traddr": "10.0.0.2", 00:19:00.447 "trsvcid": "4420" 00:19:00.447 }, 00:19:00.447 "peer_address": { 00:19:00.447 "trtype": "TCP", 00:19:00.447 "adrfam": "IPv4", 00:19:00.447 "traddr": "10.0.0.1", 00:19:00.447 "trsvcid": "37714" 00:19:00.447 }, 00:19:00.447 "auth": { 00:19:00.447 "state": "completed", 00:19:00.447 "digest": "sha384", 00:19:00.447 "dhgroup": "ffdhe2048" 00:19:00.447 } 00:19:00.447 } 00:19:00.447 ]' 00:19:00.447 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.447 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.447 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.447 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:00.447 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.447 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.447 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.447 11:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:00.707 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:19:01.278 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.278 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:01.278 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.278 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.278 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.278 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.278 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.278 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:01.278 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:01.538 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:01.538 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.538 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.538 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:01.538 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.538 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.538 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.538 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.538 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.538 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.538 11:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.538 11:07:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.799 00:19:01.799 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.799 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.799 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.799 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.799 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.799 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.799 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.799 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.799 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.799 { 00:19:01.799 "cntlid": 65, 00:19:01.799 "qid": 0, 00:19:01.799 "state": "enabled", 00:19:01.799 "thread": "nvmf_tgt_poll_group_000", 00:19:01.799 "listen_address": { 00:19:01.799 "trtype": "TCP", 00:19:01.799 "adrfam": "IPv4", 00:19:01.799 "traddr": "10.0.0.2", 00:19:01.799 "trsvcid": "4420" 00:19:01.799 }, 00:19:01.799 "peer_address": { 00:19:01.799 "trtype": "TCP", 00:19:01.799 "adrfam": "IPv4", 00:19:01.799 "traddr": "10.0.0.1", 00:19:01.799 "trsvcid": "37744" 00:19:01.799 }, 00:19:01.799 "auth": { 00:19:01.799 "state": "completed", 00:19:01.799 "digest": "sha384", 00:19:01.799 "dhgroup": "ffdhe3072" 00:19:01.799 } 00:19:01.799 } 00:19:01.799 ]' 00:19:01.799 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.059 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.059 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.059 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:02.059 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.059 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.059 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.059 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.318 11:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.888 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.148 00:19:03.148 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.148 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.148 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.408 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.408 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.408 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.408 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.408 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.408 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.408 { 00:19:03.408 "cntlid": 67, 00:19:03.408 "qid": 0, 00:19:03.408 "state": "enabled", 00:19:03.408 "thread": "nvmf_tgt_poll_group_000", 00:19:03.408 "listen_address": { 00:19:03.408 "trtype": "TCP", 00:19:03.408 "adrfam": "IPv4", 00:19:03.408 "traddr": "10.0.0.2", 00:19:03.408 "trsvcid": "4420" 00:19:03.408 }, 00:19:03.408 "peer_address": { 00:19:03.408 "trtype": "TCP", 00:19:03.408 "adrfam": "IPv4", 00:19:03.408 "traddr": "10.0.0.1", 00:19:03.408 "trsvcid": "37774" 00:19:03.408 }, 00:19:03.408 "auth": { 00:19:03.408 "state": "completed", 00:19:03.408 "digest": "sha384", 00:19:03.408 "dhgroup": "ffdhe3072" 00:19:03.408 } 00:19:03.408 } 00:19:03.408 ]' 00:19:03.408 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.408 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.408 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.408 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.408 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.668 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.668 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.668 11:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.668 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:19:04.238 11:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.238 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:04.238 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.238 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.238 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.238 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.238 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:04.238 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:04.498 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:04.498 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.498 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:04.498 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:04.498 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.498 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.498 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.498 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.498 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.498 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.498 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.498 11:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.758 00:19:04.758 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.758 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 
-- # hostrpc bdev_nvme_get_controllers 00:19:04.758 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.758 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.758 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.758 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.758 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.758 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.017 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.017 { 00:19:05.017 "cntlid": 69, 00:19:05.017 "qid": 0, 00:19:05.017 "state": "enabled", 00:19:05.017 "thread": "nvmf_tgt_poll_group_000", 00:19:05.017 "listen_address": { 00:19:05.017 "trtype": "TCP", 00:19:05.017 "adrfam": "IPv4", 00:19:05.017 "traddr": "10.0.0.2", 00:19:05.017 "trsvcid": "4420" 00:19:05.017 }, 00:19:05.017 "peer_address": { 00:19:05.017 "trtype": "TCP", 00:19:05.017 "adrfam": "IPv4", 00:19:05.017 "traddr": "10.0.0.1", 00:19:05.017 "trsvcid": "37812" 00:19:05.017 }, 00:19:05.017 "auth": { 00:19:05.017 "state": "completed", 00:19:05.017 "digest": "sha384", 00:19:05.017 "dhgroup": "ffdhe3072" 00:19:05.017 } 00:19:05.017 } 00:19:05.017 ]' 00:19:05.017 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.017 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.017 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.017 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:05.017 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.017 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.017 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.017 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.277 11:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:19:05.846 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.846 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:05.846 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.846 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.846 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.846 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.847 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:05.847 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:05.847 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:05.847 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.847 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:05.847 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:05.847 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:05.847 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.847 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:05.847 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.847 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.847 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.847 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.847 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.106 00:19:06.106 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.106 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.106 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.366 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.366 11:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.366 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.366 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.366 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.366 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.366 { 00:19:06.366 "cntlid": 71, 00:19:06.366 "qid": 0, 00:19:06.366 "state": "enabled", 00:19:06.366 "thread": "nvmf_tgt_poll_group_000", 00:19:06.366 "listen_address": { 00:19:06.366 "trtype": "TCP", 00:19:06.366 "adrfam": "IPv4", 00:19:06.366 "traddr": "10.0.0.2", 00:19:06.366 "trsvcid": "4420" 00:19:06.366 }, 00:19:06.366 "peer_address": { 00:19:06.366 "trtype": "TCP", 00:19:06.366 "adrfam": "IPv4", 00:19:06.366 "traddr": "10.0.0.1", 00:19:06.366 "trsvcid": "37832" 00:19:06.366 }, 00:19:06.366 "auth": { 00:19:06.366 "state": "completed", 00:19:06.366 "digest": "sha384", 00:19:06.366 "dhgroup": "ffdhe3072" 00:19:06.366 } 00:19:06.366 } 00:19:06.366 ]' 00:19:06.366 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.366 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.366 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.366 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:06.366 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.366 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.366 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.366 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.626 11:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:19:07.196 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.196 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:07.196 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.196 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.196 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.196 11:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.196 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.196 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.196 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.456 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:07.456 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.456 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:07.456 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:07.456 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.456 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.456 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.456 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.456 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.456 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.456 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.456 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.758 00:19:07.758 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.758 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.758 11:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.758 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.758 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.758 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.758 11:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.758 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.758 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.758 { 00:19:07.758 "cntlid": 73, 00:19:07.758 "qid": 0, 00:19:07.758 "state": "enabled", 00:19:07.758 "thread": "nvmf_tgt_poll_group_000", 00:19:07.758 "listen_address": { 00:19:07.758 "trtype": "TCP", 00:19:07.758 "adrfam": "IPv4", 00:19:07.758 "traddr": "10.0.0.2", 00:19:07.758 "trsvcid": "4420" 00:19:07.758 }, 00:19:07.758 "peer_address": { 00:19:07.758 "trtype": "TCP", 00:19:07.758 "adrfam": "IPv4", 00:19:07.758 "traddr": "10.0.0.1", 00:19:07.758 "trsvcid": "38510" 00:19:07.758 }, 00:19:07.758 "auth": { 00:19:07.758 "state": "completed", 00:19:07.758 "digest": "sha384", 00:19:07.758 "dhgroup": "ffdhe4096" 00:19:07.758 } 00:19:07.758 } 00:19:07.758 ]' 00:19:07.758 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.758 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.758 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.041 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:08.041 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.041 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.041 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.041 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.041 11:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:19:08.611 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.611 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:08.611 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.611 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.611 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.611 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.611 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:08.611 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:08.871 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:08.871 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.871 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:08.871 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:08.871 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.871 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.871 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.871 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.871 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.871 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.871 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.871 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.131 00:19:09.131 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.131 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.131 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.391 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.391 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.391 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.391 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.391 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.391 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:19:09.391 { 00:19:09.391 "cntlid": 75, 00:19:09.391 "qid": 0, 00:19:09.391 "state": "enabled", 00:19:09.391 "thread": "nvmf_tgt_poll_group_000", 00:19:09.391 "listen_address": { 00:19:09.391 "trtype": "TCP", 00:19:09.391 "adrfam": "IPv4", 00:19:09.391 "traddr": "10.0.0.2", 00:19:09.391 "trsvcid": "4420" 00:19:09.391 }, 00:19:09.391 "peer_address": { 00:19:09.391 "trtype": "TCP", 00:19:09.391 "adrfam": "IPv4", 00:19:09.391 "traddr": "10.0.0.1", 00:19:09.391 "trsvcid": "38536" 00:19:09.391 }, 00:19:09.391 "auth": { 00:19:09.391 "state": "completed", 00:19:09.391 "digest": "sha384", 00:19:09.391 "dhgroup": "ffdhe4096" 00:19:09.391 } 00:19:09.391 } 00:19:09.391 ]' 00:19:09.391 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.391 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.391 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.391 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:09.391 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.391 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.391 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.391 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.651 11:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:10.220 
11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.220 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.481 00:19:10.741 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.741 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.741 11:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.741 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.741 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.741 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.741 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.741 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.741 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.741 { 00:19:10.741 "cntlid": 77, 00:19:10.741 "qid": 0, 00:19:10.741 "state": "enabled", 00:19:10.741 "thread": "nvmf_tgt_poll_group_000", 00:19:10.741 "listen_address": { 00:19:10.741 "trtype": "TCP", 00:19:10.741 "adrfam": "IPv4", 00:19:10.741 "traddr": "10.0.0.2", 00:19:10.741 "trsvcid": "4420" 00:19:10.741 }, 00:19:10.741 "peer_address": { 
00:19:10.741 "trtype": "TCP", 00:19:10.741 "adrfam": "IPv4", 00:19:10.741 "traddr": "10.0.0.1", 00:19:10.741 "trsvcid": "38564" 00:19:10.741 }, 00:19:10.741 "auth": { 00:19:10.741 "state": "completed", 00:19:10.741 "digest": "sha384", 00:19:10.741 "dhgroup": "ffdhe4096" 00:19:10.741 } 00:19:10.741 } 00:19:10.741 ]' 00:19:10.741 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.741 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.741 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.001 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:11.001 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.001 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.001 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.001 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.001 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:19:11.593 11:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.593 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:11.593 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.593 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.593 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.593 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.593 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:11.593 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:11.853 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:11.853 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.853 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:19:11.853 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:11.853 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:11.853 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.853 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:11.853 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.853 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.853 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.853 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.853 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.112 00:19:12.112 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.112 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.112 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.372 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.372 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.372 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.372 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.372 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.372 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.372 { 00:19:12.372 "cntlid": 79, 00:19:12.372 "qid": 0, 00:19:12.372 "state": "enabled", 00:19:12.372 "thread": "nvmf_tgt_poll_group_000", 00:19:12.372 "listen_address": { 00:19:12.372 "trtype": "TCP", 00:19:12.372 "adrfam": "IPv4", 00:19:12.372 "traddr": "10.0.0.2", 00:19:12.372 "trsvcid": "4420" 00:19:12.372 }, 00:19:12.372 "peer_address": { 00:19:12.372 "trtype": "TCP", 00:19:12.372 "adrfam": "IPv4", 00:19:12.372 "traddr": "10.0.0.1", 00:19:12.372 "trsvcid": "38600" 00:19:12.372 }, 00:19:12.372 "auth": { 00:19:12.372 "state": "completed", 00:19:12.372 "digest": "sha384", 00:19:12.372 "dhgroup": "ffdhe4096" 00:19:12.372 } 00:19:12.372 } 00:19:12.372 ]' 00:19:12.372 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:12.372 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.372 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.372 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.372 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.372 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.372 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.372 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.632 11:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
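After detaching the bdev_nvme controller, the trace drives the same secrets through the kernel NVMe/TCP initiator before dropping the host again. A sketch of that leg follows, with the NQN, host UUID, address and flags taken from the log and the DHHC-1 secret strings elided rather than repeated here.

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562

# Kernel initiator connect with in-band DH-HMAC-CHAP (full DHHC-1 strings appear in the trace)
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret 'DHHC-1:00:<host key>' --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'

# Expect: "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)"
nvme disconnect -n "$SUBNQN"

# Remove the host from the subsystem before the next key/DH-group combination
# (target RPC socket not shown in the trace; default assumed)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"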
00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.202 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.772 00:19:13.772 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.772 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.772 11:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.772 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.772 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.772 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.772 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.772 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.772 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.772 { 00:19:13.772 "cntlid": 81, 00:19:13.772 "qid": 0, 00:19:13.772 "state": "enabled", 00:19:13.772 "thread": "nvmf_tgt_poll_group_000", 00:19:13.772 "listen_address": { 00:19:13.772 "trtype": "TCP", 00:19:13.772 "adrfam": "IPv4", 00:19:13.772 "traddr": "10.0.0.2", 00:19:13.772 "trsvcid": "4420" 00:19:13.772 }, 00:19:13.772 "peer_address": { 00:19:13.772 "trtype": "TCP", 00:19:13.772 "adrfam": "IPv4", 00:19:13.772 "traddr": "10.0.0.1", 00:19:13.772 "trsvcid": "38638" 00:19:13.772 }, 00:19:13.772 "auth": { 00:19:13.772 "state": "completed", 00:19:13.772 "digest": "sha384", 00:19:13.772 "dhgroup": "ffdhe6144" 00:19:13.772 } 00:19:13.772 } 00:19:13.772 ]' 00:19:13.772 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.772 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.772 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.772 11:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.772 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.031 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.031 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.031 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.031 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:19:14.599 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.599 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:14.599 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.599 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.599 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.599 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.599 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:14.599 11:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:14.860 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:14.860 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.860 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:14.860 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:14.860 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:14.860 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.860 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.860 11:07:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.860 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.860 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.860 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.860 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.120 00:19:15.120 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.120 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.120 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.380 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.380 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.380 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.380 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.380 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.380 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.380 { 00:19:15.380 "cntlid": 83, 00:19:15.380 "qid": 0, 00:19:15.380 "state": "enabled", 00:19:15.380 "thread": "nvmf_tgt_poll_group_000", 00:19:15.380 "listen_address": { 00:19:15.380 "trtype": "TCP", 00:19:15.380 "adrfam": "IPv4", 00:19:15.380 "traddr": "10.0.0.2", 00:19:15.380 "trsvcid": "4420" 00:19:15.380 }, 00:19:15.380 "peer_address": { 00:19:15.380 "trtype": "TCP", 00:19:15.380 "adrfam": "IPv4", 00:19:15.380 "traddr": "10.0.0.1", 00:19:15.380 "trsvcid": "38662" 00:19:15.380 }, 00:19:15.380 "auth": { 00:19:15.380 "state": "completed", 00:19:15.380 "digest": "sha384", 00:19:15.380 "dhgroup": "ffdhe6144" 00:19:15.380 } 00:19:15.380 } 00:19:15.380 ]' 00:19:15.380 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.380 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.380 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.380 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.380 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.380 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.380 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.380 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.640 11:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.211 11:07:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.211 11:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.781 00:19:16.781 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.781 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.781 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.781 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.781 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.781 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.781 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.781 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.781 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.781 { 00:19:16.781 "cntlid": 85, 00:19:16.781 "qid": 0, 00:19:16.781 "state": "enabled", 00:19:16.781 "thread": "nvmf_tgt_poll_group_000", 00:19:16.781 "listen_address": { 00:19:16.781 "trtype": "TCP", 00:19:16.781 "adrfam": "IPv4", 00:19:16.781 "traddr": "10.0.0.2", 00:19:16.781 "trsvcid": "4420" 00:19:16.781 }, 00:19:16.781 "peer_address": { 00:19:16.781 "trtype": "TCP", 00:19:16.781 "adrfam": "IPv4", 00:19:16.781 "traddr": "10.0.0.1", 00:19:16.781 "trsvcid": "38686" 00:19:16.781 }, 00:19:16.781 "auth": { 00:19:16.781 "state": "completed", 00:19:16.781 "digest": "sha384", 00:19:16.781 "dhgroup": "ffdhe6144" 00:19:16.781 } 00:19:16.781 } 00:19:16.781 ]' 00:19:16.781 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.781 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.781 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.041 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.041 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.041 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.041 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.041 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.041 11:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:19:17.611 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.611 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:17.611 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.611 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.611 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.611 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.611 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:17.611 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:17.872 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:17.872 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.872 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:17.872 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:17.872 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:17.872 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.872 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:17.872 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.872 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.872 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.872 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.872 11:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:18.132 00:19:18.132 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.132 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.132 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.401 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.401 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.401 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.401 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.401 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.401 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.401 { 00:19:18.401 "cntlid": 87, 00:19:18.401 "qid": 0, 00:19:18.401 "state": "enabled", 00:19:18.401 "thread": "nvmf_tgt_poll_group_000", 00:19:18.401 "listen_address": { 00:19:18.401 "trtype": "TCP", 00:19:18.401 "adrfam": "IPv4", 00:19:18.402 "traddr": "10.0.0.2", 00:19:18.402 "trsvcid": "4420" 00:19:18.402 }, 00:19:18.402 "peer_address": { 00:19:18.402 "trtype": "TCP", 00:19:18.402 "adrfam": "IPv4", 00:19:18.402 "traddr": "10.0.0.1", 00:19:18.402 "trsvcid": "33440" 00:19:18.402 }, 00:19:18.402 "auth": { 00:19:18.402 "state": "completed", 00:19:18.402 "digest": "sha384", 00:19:18.402 "dhgroup": "ffdhe6144" 00:19:18.402 } 00:19:18.403 } 00:19:18.403 ]' 00:19:18.403 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.403 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.403 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.403 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:18.403 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.403 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.403 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.403 11:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.666 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:19:19.235 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.235 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:19.235 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.235 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.235 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.235 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.235 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.235 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:19.235 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:19.496 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:19.496 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.496 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:19.496 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:19.496 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.496 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.496 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.496 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.496 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.496 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.496 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.496 11:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.065 00:19:20.065 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.066 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.066 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.066 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.066 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.066 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.066 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.066 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.066 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.066 { 00:19:20.066 "cntlid": 89, 00:19:20.066 "qid": 0, 00:19:20.066 "state": "enabled", 00:19:20.066 "thread": "nvmf_tgt_poll_group_000", 00:19:20.066 "listen_address": { 00:19:20.066 "trtype": "TCP", 00:19:20.066 "adrfam": "IPv4", 00:19:20.066 "traddr": "10.0.0.2", 00:19:20.066 "trsvcid": "4420" 00:19:20.066 }, 00:19:20.066 "peer_address": { 00:19:20.066 "trtype": "TCP", 00:19:20.066 "adrfam": "IPv4", 00:19:20.066 "traddr": "10.0.0.1", 00:19:20.066 "trsvcid": "33474" 00:19:20.066 }, 00:19:20.066 "auth": { 00:19:20.066 "state": "completed", 00:19:20.066 "digest": "sha384", 00:19:20.066 "dhgroup": "ffdhe8192" 00:19:20.066 } 00:19:20.066 } 00:19:20.066 ]' 00:19:20.066 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.066 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.066 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.066 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:20.066 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.326 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.326 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.326 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.326 11:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:19:20.897 11:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.897 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:20.897 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.897 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.897 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.897 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.897 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:20.897 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:21.157 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:21.157 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.157 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:21.157 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:21.157 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:21.157 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.157 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.157 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.157 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.157 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.157 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.157 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.727 00:19:21.727 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.727 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.727 11:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.727 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.727 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.727 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.727 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.727 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.727 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.727 { 00:19:21.727 "cntlid": 91, 00:19:21.727 "qid": 0, 00:19:21.727 "state": "enabled", 00:19:21.727 "thread": "nvmf_tgt_poll_group_000", 00:19:21.727 "listen_address": { 00:19:21.727 "trtype": "TCP", 00:19:21.727 "adrfam": "IPv4", 00:19:21.727 "traddr": "10.0.0.2", 00:19:21.727 "trsvcid": "4420" 00:19:21.727 }, 00:19:21.727 "peer_address": { 00:19:21.727 "trtype": "TCP", 00:19:21.727 "adrfam": "IPv4", 00:19:21.727 "traddr": "10.0.0.1", 00:19:21.727 "trsvcid": "33512" 00:19:21.727 }, 00:19:21.727 "auth": { 00:19:21.727 "state": "completed", 00:19:21.727 "digest": "sha384", 00:19:21.727 "dhgroup": "ffdhe8192" 00:19:21.727 } 00:19:21.727 } 00:19:21.727 ]' 00:19:21.727 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.727 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:21.989 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.989 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.989 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.989 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.989 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.989 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.989 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:19:22.593 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.593 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:22.593 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.593 11:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.593 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.593 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.593 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:22.593 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:22.854 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:22.854 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.854 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:22.854 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:22.854 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.854 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.854 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.854 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.854 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.854 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.854 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.854 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:23.423 00:19:23.423 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.423 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.423 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.423 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:23.423 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.423 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.423 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.423 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.423 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.423 { 00:19:23.423 "cntlid": 93, 00:19:23.423 "qid": 0, 00:19:23.423 "state": "enabled", 00:19:23.423 "thread": "nvmf_tgt_poll_group_000", 00:19:23.423 "listen_address": { 00:19:23.423 "trtype": "TCP", 00:19:23.423 "adrfam": "IPv4", 00:19:23.423 "traddr": "10.0.0.2", 00:19:23.423 "trsvcid": "4420" 00:19:23.423 }, 00:19:23.423 "peer_address": { 00:19:23.423 "trtype": "TCP", 00:19:23.423 "adrfam": "IPv4", 00:19:23.423 "traddr": "10.0.0.1", 00:19:23.423 "trsvcid": "33530" 00:19:23.423 }, 00:19:23.423 "auth": { 00:19:23.423 "state": "completed", 00:19:23.423 "digest": "sha384", 00:19:23.423 "dhgroup": "ffdhe8192" 00:19:23.423 } 00:19:23.423 } 00:19:23.423 ]' 00:19:23.423 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.424 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.424 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.682 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:23.682 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.682 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.682 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.682 11:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.682 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:19:24.250 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.250 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:24.250 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.250 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.250 11:07:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.250 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.250 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:24.250 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:24.509 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:24.509 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.509 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:24.509 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:24.509 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.509 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.509 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:24.509 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.509 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.509 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.509 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.509 11:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.077 00:19:25.077 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.077 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.077 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.077 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.077 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.077 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.077 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:19:25.337 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.337 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.337 { 00:19:25.337 "cntlid": 95, 00:19:25.337 "qid": 0, 00:19:25.337 "state": "enabled", 00:19:25.337 "thread": "nvmf_tgt_poll_group_000", 00:19:25.337 "listen_address": { 00:19:25.337 "trtype": "TCP", 00:19:25.337 "adrfam": "IPv4", 00:19:25.337 "traddr": "10.0.0.2", 00:19:25.337 "trsvcid": "4420" 00:19:25.337 }, 00:19:25.337 "peer_address": { 00:19:25.337 "trtype": "TCP", 00:19:25.337 "adrfam": "IPv4", 00:19:25.337 "traddr": "10.0.0.1", 00:19:25.337 "trsvcid": "33566" 00:19:25.337 }, 00:19:25.337 "auth": { 00:19:25.337 "state": "completed", 00:19:25.337 "digest": "sha384", 00:19:25.337 "dhgroup": "ffdhe8192" 00:19:25.337 } 00:19:25.337 } 00:19:25.337 ]' 00:19:25.337 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.337 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.337 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.337 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:25.337 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.337 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.337 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.337 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.597 11:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.166 11:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.166 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.426 00:19:26.426 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.426 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.426 11:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.685 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.685 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.685 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.685 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.685 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.685 11:07:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.685 { 00:19:26.685 "cntlid": 97, 00:19:26.685 "qid": 0, 00:19:26.685 "state": "enabled", 00:19:26.685 "thread": "nvmf_tgt_poll_group_000", 00:19:26.685 "listen_address": { 00:19:26.685 "trtype": "TCP", 00:19:26.685 "adrfam": "IPv4", 00:19:26.685 "traddr": "10.0.0.2", 00:19:26.685 "trsvcid": "4420" 00:19:26.685 }, 00:19:26.685 "peer_address": { 00:19:26.685 "trtype": "TCP", 00:19:26.685 "adrfam": "IPv4", 00:19:26.685 "traddr": "10.0.0.1", 00:19:26.685 "trsvcid": "33588" 00:19:26.685 }, 00:19:26.685 "auth": { 00:19:26.685 "state": "completed", 00:19:26.685 "digest": "sha512", 00:19:26.685 "dhgroup": "null" 00:19:26.685 } 00:19:26.685 } 00:19:26.685 ]' 00:19:26.685 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.685 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.685 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.685 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:26.685 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.685 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.685 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.685 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.945 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:19:27.514 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.514 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:27.514 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.514 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.514 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.514 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.514 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:27.514 11:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:27.775 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:27.775 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.775 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.775 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:27.775 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.775 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.775 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.775 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.775 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.775 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.775 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.775 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.035 00:19:28.035 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.035 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.035 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.035 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.035 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.035 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.035 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.035 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.035 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.035 { 00:19:28.035 "cntlid": 99, 00:19:28.035 "qid": 0, 00:19:28.035 "state": "enabled", 00:19:28.035 "thread": "nvmf_tgt_poll_group_000", 00:19:28.035 "listen_address": { 00:19:28.035 "trtype": "TCP", 00:19:28.035 "adrfam": "IPv4", 00:19:28.035 
"traddr": "10.0.0.2", 00:19:28.035 "trsvcid": "4420" 00:19:28.035 }, 00:19:28.035 "peer_address": { 00:19:28.035 "trtype": "TCP", 00:19:28.035 "adrfam": "IPv4", 00:19:28.035 "traddr": "10.0.0.1", 00:19:28.035 "trsvcid": "36828" 00:19:28.035 }, 00:19:28.035 "auth": { 00:19:28.035 "state": "completed", 00:19:28.035 "digest": "sha512", 00:19:28.035 "dhgroup": "null" 00:19:28.035 } 00:19:28.035 } 00:19:28.035 ]' 00:19:28.035 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.035 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.295 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.295 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:28.295 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.295 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.295 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.295 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.554 11:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:19:29.122 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.123 11:07:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.123 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.382 00:19:29.382 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.382 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.382 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.642 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.642 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.642 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.642 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.642 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.642 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.642 { 00:19:29.642 "cntlid": 101, 00:19:29.642 "qid": 0, 00:19:29.642 "state": "enabled", 00:19:29.642 "thread": "nvmf_tgt_poll_group_000", 00:19:29.642 "listen_address": { 00:19:29.642 "trtype": "TCP", 00:19:29.642 "adrfam": "IPv4", 00:19:29.642 "traddr": "10.0.0.2", 00:19:29.642 "trsvcid": "4420" 00:19:29.642 }, 00:19:29.642 "peer_address": { 00:19:29.642 "trtype": "TCP", 00:19:29.642 "adrfam": "IPv4", 00:19:29.642 "traddr": "10.0.0.1", 00:19:29.642 "trsvcid": "36866" 00:19:29.642 }, 00:19:29.642 "auth": { 00:19:29.642 "state": "completed", 00:19:29.642 "digest": "sha512", 00:19:29.642 "dhgroup": "null" 
00:19:29.642 } 00:19:29.642 } 00:19:29.642 ]' 00:19:29.642 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.642 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.642 11:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.642 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:29.642 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.642 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.642 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.642 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.902 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.473 11:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.732 00:19:30.732 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.732 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.732 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.991 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.991 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.991 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.991 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.991 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.991 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.991 { 00:19:30.991 "cntlid": 103, 00:19:30.991 "qid": 0, 00:19:30.991 "state": "enabled", 00:19:30.991 "thread": "nvmf_tgt_poll_group_000", 00:19:30.991 "listen_address": { 00:19:30.991 "trtype": "TCP", 00:19:30.991 "adrfam": "IPv4", 00:19:30.991 "traddr": "10.0.0.2", 00:19:30.991 "trsvcid": "4420" 00:19:30.991 }, 00:19:30.991 "peer_address": { 00:19:30.991 "trtype": "TCP", 00:19:30.991 "adrfam": "IPv4", 00:19:30.991 "traddr": "10.0.0.1", 00:19:30.991 "trsvcid": "36886" 00:19:30.991 }, 00:19:30.991 "auth": { 00:19:30.991 "state": "completed", 00:19:30.991 "digest": "sha512", 00:19:30.991 "dhgroup": "null" 00:19:30.991 } 00:19:30.991 } 00:19:30.991 ]' 00:19:30.991 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.991 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.991 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.991 11:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:30.991 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.251 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.251 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.251 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.251 11:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:19:31.820 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.820 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:31.820 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.820 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.820 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.820 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.820 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.820 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:31.820 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:32.079 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:32.079 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.079 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.079 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:32.079 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.079 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.079 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.079 11:07:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.079 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.079 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.079 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.079 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.339 00:19:32.339 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.339 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.339 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.339 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.339 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.340 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.340 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.340 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.340 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.340 { 00:19:32.340 "cntlid": 105, 00:19:32.340 "qid": 0, 00:19:32.340 "state": "enabled", 00:19:32.340 "thread": "nvmf_tgt_poll_group_000", 00:19:32.340 "listen_address": { 00:19:32.340 "trtype": "TCP", 00:19:32.340 "adrfam": "IPv4", 00:19:32.340 "traddr": "10.0.0.2", 00:19:32.340 "trsvcid": "4420" 00:19:32.340 }, 00:19:32.340 "peer_address": { 00:19:32.340 "trtype": "TCP", 00:19:32.340 "adrfam": "IPv4", 00:19:32.340 "traddr": "10.0.0.1", 00:19:32.340 "trsvcid": "36906" 00:19:32.340 }, 00:19:32.340 "auth": { 00:19:32.340 "state": "completed", 00:19:32.340 "digest": "sha512", 00:19:32.340 "dhgroup": "ffdhe2048" 00:19:32.340 } 00:19:32.340 } 00:19:32.340 ]' 00:19:32.340 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.598 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.598 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.598 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:32.598 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.598 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.598 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.598 11:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.858 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.428 11:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.688 00:19:33.688 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.688 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.688 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.947 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.947 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.947 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.947 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.947 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.947 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.947 { 00:19:33.947 "cntlid": 107, 00:19:33.947 "qid": 0, 00:19:33.947 "state": "enabled", 00:19:33.947 "thread": "nvmf_tgt_poll_group_000", 00:19:33.947 "listen_address": { 00:19:33.947 "trtype": "TCP", 00:19:33.947 "adrfam": "IPv4", 00:19:33.947 "traddr": "10.0.0.2", 00:19:33.947 "trsvcid": "4420" 00:19:33.947 }, 00:19:33.947 "peer_address": { 00:19:33.947 "trtype": "TCP", 00:19:33.947 "adrfam": "IPv4", 00:19:33.947 "traddr": "10.0.0.1", 00:19:33.947 "trsvcid": "36940" 00:19:33.947 }, 00:19:33.947 "auth": { 00:19:33.947 "state": "completed", 00:19:33.947 "digest": "sha512", 00:19:33.947 "dhgroup": "ffdhe2048" 00:19:33.947 } 00:19:33.947 } 00:19:33.947 ]' 00:19:33.947 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.947 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.947 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.947 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:33.947 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.947 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.947 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.947 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.207 11:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:19:34.776 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.776 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:34.776 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.776 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.776 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.776 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.776 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.776 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:35.036 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:35.036 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.036 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.036 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:35.036 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.036 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.036 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.036 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.036 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.036 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.036 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
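The entries around this point are one iteration of the test's digest/dhgroup/key sweep: the host-side bdev_nvme module is restricted to a single digest and DH group, the host NQN is registered on nqn.2024-03.io.spdk:cnode0 with the matching DH-HMAC-CHAP key pair, a controller is attached and its qpair is checked for auth.state == completed with the expected digest and dhgroup, the same handshake is then repeated with the kernel initiator via nvme-cli, and finally the host entry is removed before the next combination. A minimal bash sketch of that cycle, condensed from the commands visible in this log, follows; it is not the actual target/auth.sh: the tgtrpc wrapper and the DHHC-1 placeholder secrets are stand-ins (the real test uses the SPDK harness helpers rpc_cmd and hostrpc, and real generated secrets), and only the command and flag shapes are taken from the surrounding entries.

#!/usr/bin/env bash
# Condensed sketch of one connect_authenticate cycle as recorded in this log.
# Not the actual target/auth.sh: the hostrpc/tgtrpc wrappers and the DHHC-1
# placeholder secrets below are stand-ins; the command and flag shapes are
# copied from the surrounding log entries.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostid=80aaeb9f-0274-ea11-906e-0017a4403562
hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid"

hostrpc() { "$rpc_py" -s /var/tmp/host.sock "$@"; }  # host-side SPDK application
tgtrpc()  { "$rpc_py" "$@"; }                        # target-side SPDK application (default socket)

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Restrict the host-side bdev_nvme module to the digest/dhgroup pair under test.
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Register the host on the subsystem with the matching DH-HMAC-CHAP key pair.
    tgtrpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach from the SPDK host app and check that the qpair finished authentication.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                  # expect nvme0
    tgtrpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'  # expect completed
    hostrpc bdev_nvme_detach_controller nvme0

    # Repeat the handshake with the kernel initiator through nvme-cli.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "DHHC-1:00:<host secret>" --dhchap-ctrl-secret "DHHC-1:03:<ctrl secret>"
    nvme disconnect -n "$subnqn"

    # Drop the host entry so the next digest/dhgroup/key combination starts clean.
    tgtrpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}

connect_authenticate sha512 ffdhe2048 2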
00:19:35.036 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.296 00:19:35.296 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.296 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.297 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.297 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.297 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.297 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.297 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.297 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.297 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.297 { 00:19:35.297 "cntlid": 109, 00:19:35.297 "qid": 0, 00:19:35.297 "state": "enabled", 00:19:35.297 "thread": "nvmf_tgt_poll_group_000", 00:19:35.297 "listen_address": { 00:19:35.297 "trtype": "TCP", 00:19:35.297 "adrfam": "IPv4", 00:19:35.297 "traddr": "10.0.0.2", 00:19:35.297 "trsvcid": "4420" 00:19:35.297 }, 00:19:35.297 "peer_address": { 00:19:35.297 "trtype": "TCP", 00:19:35.297 "adrfam": "IPv4", 00:19:35.297 "traddr": "10.0.0.1", 00:19:35.297 "trsvcid": "36956" 00:19:35.297 }, 00:19:35.297 "auth": { 00:19:35.297 "state": "completed", 00:19:35.297 "digest": "sha512", 00:19:35.297 "dhgroup": "ffdhe2048" 00:19:35.297 } 00:19:35.297 } 00:19:35.297 ]' 00:19:35.297 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.556 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.556 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.556 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.556 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.556 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.556 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.556 11:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.814 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.381 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.382 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.382 11:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.668 00:19:36.668 11:07:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.668 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.668 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.926 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.926 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.926 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.926 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.926 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.926 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.926 { 00:19:36.926 "cntlid": 111, 00:19:36.926 "qid": 0, 00:19:36.926 "state": "enabled", 00:19:36.926 "thread": "nvmf_tgt_poll_group_000", 00:19:36.926 "listen_address": { 00:19:36.926 "trtype": "TCP", 00:19:36.926 "adrfam": "IPv4", 00:19:36.926 "traddr": "10.0.0.2", 00:19:36.926 "trsvcid": "4420" 00:19:36.926 }, 00:19:36.926 "peer_address": { 00:19:36.926 "trtype": "TCP", 00:19:36.926 "adrfam": "IPv4", 00:19:36.926 "traddr": "10.0.0.1", 00:19:36.926 "trsvcid": "36990" 00:19:36.926 }, 00:19:36.926 "auth": { 00:19:36.926 "state": "completed", 00:19:36.926 "digest": "sha512", 00:19:36.926 "dhgroup": "ffdhe2048" 00:19:36.926 } 00:19:36.926 } 00:19:36.926 ]' 00:19:36.927 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.927 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.927 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.927 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:36.927 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.927 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.927 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.927 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.185 11:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:19:37.755 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.755 11:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:37.755 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.755 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.755 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.755 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.755 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.755 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:37.755 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:37.755 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:37.755 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.015 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:38.015 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:38.015 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:38.015 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.015 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.015 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.015 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.015 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.015 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.015 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.015 00:19:38.015 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.015 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.015 11:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.275 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.275 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.275 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.275 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.275 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.275 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.275 { 00:19:38.275 "cntlid": 113, 00:19:38.275 "qid": 0, 00:19:38.275 "state": "enabled", 00:19:38.275 "thread": "nvmf_tgt_poll_group_000", 00:19:38.275 "listen_address": { 00:19:38.275 "trtype": "TCP", 00:19:38.275 "adrfam": "IPv4", 00:19:38.275 "traddr": "10.0.0.2", 00:19:38.275 "trsvcid": "4420" 00:19:38.275 }, 00:19:38.275 "peer_address": { 00:19:38.275 "trtype": "TCP", 00:19:38.275 "adrfam": "IPv4", 00:19:38.275 "traddr": "10.0.0.1", 00:19:38.275 "trsvcid": "57018" 00:19:38.275 }, 00:19:38.275 "auth": { 00:19:38.275 "state": "completed", 00:19:38.275 "digest": "sha512", 00:19:38.275 "dhgroup": "ffdhe3072" 00:19:38.275 } 00:19:38.275 } 00:19:38.275 ]' 00:19:38.275 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.275 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.275 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.275 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.275 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.535 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.535 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.535 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.535 11:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:19:39.102 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.102 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:39.102 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.102 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.102 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.102 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.102 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:39.102 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:39.361 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:39.361 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.361 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.361 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:39.361 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:39.361 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.361 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.361 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.361 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.361 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.361 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.361 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.621 00:19:39.621 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.621 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.621 11:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.881 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:39.881 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.881 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.881 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.881 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.881 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.881 { 00:19:39.881 "cntlid": 115, 00:19:39.881 "qid": 0, 00:19:39.881 "state": "enabled", 00:19:39.881 "thread": "nvmf_tgt_poll_group_000", 00:19:39.881 "listen_address": { 00:19:39.881 "trtype": "TCP", 00:19:39.881 "adrfam": "IPv4", 00:19:39.881 "traddr": "10.0.0.2", 00:19:39.881 "trsvcid": "4420" 00:19:39.881 }, 00:19:39.881 "peer_address": { 00:19:39.881 "trtype": "TCP", 00:19:39.881 "adrfam": "IPv4", 00:19:39.881 "traddr": "10.0.0.1", 00:19:39.881 "trsvcid": "57042" 00:19:39.881 }, 00:19:39.881 "auth": { 00:19:39.881 "state": "completed", 00:19:39.881 "digest": "sha512", 00:19:39.881 "dhgroup": "ffdhe3072" 00:19:39.881 } 00:19:39.881 } 00:19:39.881 ]' 00:19:39.881 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.881 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.881 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.881 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:39.881 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.881 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.881 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.881 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.140 11:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:19:40.710 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.710 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:40.710 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.710 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.710 11:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.710 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.710 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:40.710 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:40.710 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:40.710 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.710 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.710 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:40.710 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:40.710 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.970 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.970 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.970 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.970 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.970 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.970 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.970 00:19:41.229 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.229 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.229 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.229 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.229 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.229 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.229 11:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.229 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.229 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.229 { 00:19:41.229 "cntlid": 117, 00:19:41.229 "qid": 0, 00:19:41.229 "state": "enabled", 00:19:41.229 "thread": "nvmf_tgt_poll_group_000", 00:19:41.229 "listen_address": { 00:19:41.229 "trtype": "TCP", 00:19:41.229 "adrfam": "IPv4", 00:19:41.229 "traddr": "10.0.0.2", 00:19:41.229 "trsvcid": "4420" 00:19:41.229 }, 00:19:41.229 "peer_address": { 00:19:41.229 "trtype": "TCP", 00:19:41.229 "adrfam": "IPv4", 00:19:41.229 "traddr": "10.0.0.1", 00:19:41.229 "trsvcid": "57058" 00:19:41.229 }, 00:19:41.229 "auth": { 00:19:41.229 "state": "completed", 00:19:41.229 "digest": "sha512", 00:19:41.229 "dhgroup": "ffdhe3072" 00:19:41.229 } 00:19:41.229 } 00:19:41.229 ]' 00:19:41.229 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.229 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.229 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.495 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.495 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.495 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.495 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.496 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.496 11:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.209 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.469 00:19:42.469 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.469 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.469 11:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.729 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.729 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.729 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.729 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.729 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.729 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.729 { 00:19:42.729 "cntlid": 119, 00:19:42.729 "qid": 0, 00:19:42.729 "state": "enabled", 00:19:42.729 "thread": 
"nvmf_tgt_poll_group_000", 00:19:42.729 "listen_address": { 00:19:42.729 "trtype": "TCP", 00:19:42.729 "adrfam": "IPv4", 00:19:42.729 "traddr": "10.0.0.2", 00:19:42.729 "trsvcid": "4420" 00:19:42.729 }, 00:19:42.729 "peer_address": { 00:19:42.729 "trtype": "TCP", 00:19:42.729 "adrfam": "IPv4", 00:19:42.729 "traddr": "10.0.0.1", 00:19:42.729 "trsvcid": "57092" 00:19:42.729 }, 00:19:42.729 "auth": { 00:19:42.729 "state": "completed", 00:19:42.729 "digest": "sha512", 00:19:42.729 "dhgroup": "ffdhe3072" 00:19:42.729 } 00:19:42.729 } 00:19:42.729 ]' 00:19:42.729 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.729 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.729 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.729 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:42.729 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.987 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.987 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.987 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.987 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:19:43.558 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.558 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:43.558 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.558 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.558 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.558 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.558 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.558 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:43.558 11:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:43.818 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:43.818 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.818 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.818 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:43.818 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:43.818 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.818 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.818 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.818 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.818 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.818 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.818 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.078 00:19:44.078 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.078 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.078 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.337 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.337 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.337 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.337 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.337 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.337 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.337 { 00:19:44.337 "cntlid": 121, 00:19:44.337 "qid": 0, 00:19:44.337 "state": "enabled", 00:19:44.337 "thread": "nvmf_tgt_poll_group_000", 00:19:44.337 "listen_address": { 00:19:44.337 "trtype": "TCP", 00:19:44.337 "adrfam": "IPv4", 00:19:44.337 "traddr": "10.0.0.2", 00:19:44.337 "trsvcid": "4420" 00:19:44.337 }, 00:19:44.337 "peer_address": { 00:19:44.337 "trtype": "TCP", 00:19:44.337 "adrfam": 
"IPv4", 00:19:44.337 "traddr": "10.0.0.1", 00:19:44.337 "trsvcid": "57118" 00:19:44.337 }, 00:19:44.337 "auth": { 00:19:44.337 "state": "completed", 00:19:44.337 "digest": "sha512", 00:19:44.337 "dhgroup": "ffdhe4096" 00:19:44.337 } 00:19:44.337 } 00:19:44.337 ]' 00:19:44.337 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.337 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.337 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.337 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:44.337 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.337 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.337 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.337 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.596 11:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.186 
11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.186 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.446 00:19:45.446 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.446 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.446 11:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.706 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.706 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.706 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.706 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.706 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.706 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.706 { 00:19:45.706 "cntlid": 123, 00:19:45.706 "qid": 0, 00:19:45.706 "state": "enabled", 00:19:45.706 "thread": "nvmf_tgt_poll_group_000", 00:19:45.706 "listen_address": { 00:19:45.706 "trtype": "TCP", 00:19:45.706 "adrfam": "IPv4", 00:19:45.706 "traddr": "10.0.0.2", 00:19:45.706 "trsvcid": "4420" 00:19:45.706 }, 00:19:45.706 "peer_address": { 00:19:45.706 "trtype": "TCP", 00:19:45.706 "adrfam": "IPv4", 00:19:45.706 "traddr": "10.0.0.1", 00:19:45.706 "trsvcid": "57156" 00:19:45.706 }, 00:19:45.706 "auth": { 00:19:45.706 "state": "completed", 00:19:45.706 "digest": "sha512", 00:19:45.706 "dhgroup": "ffdhe4096" 00:19:45.706 } 00:19:45.706 } 00:19:45.706 ]' 00:19:45.706 11:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.706 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.706 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.706 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.967 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.967 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.967 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.967 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.967 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:19:46.535 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.535 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:46.535 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.535 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.535 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.535 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.535 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:46.535 11:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:46.796 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:46.796 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.796 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:46.796 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:46.796 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:46.796 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:46.796 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.796 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.796 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.796 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.796 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.796 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.056 00:19:47.056 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.056 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.056 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.316 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.316 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.316 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.316 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.316 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.316 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.316 { 00:19:47.316 "cntlid": 125, 00:19:47.316 "qid": 0, 00:19:47.316 "state": "enabled", 00:19:47.316 "thread": "nvmf_tgt_poll_group_000", 00:19:47.316 "listen_address": { 00:19:47.316 "trtype": "TCP", 00:19:47.316 "adrfam": "IPv4", 00:19:47.316 "traddr": "10.0.0.2", 00:19:47.316 "trsvcid": "4420" 00:19:47.316 }, 00:19:47.316 "peer_address": { 00:19:47.316 "trtype": "TCP", 00:19:47.316 "adrfam": "IPv4", 00:19:47.316 "traddr": "10.0.0.1", 00:19:47.316 "trsvcid": "57188" 00:19:47.316 }, 00:19:47.316 "auth": { 00:19:47.316 "state": "completed", 00:19:47.316 "digest": "sha512", 00:19:47.316 "dhgroup": "ffdhe4096" 00:19:47.316 } 00:19:47.316 } 00:19:47.316 ]' 00:19:47.316 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.316 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.316 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.316 
11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.316 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.316 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.316 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.316 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.576 11:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:19:48.143 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.143 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:48.143 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.143 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.143 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.143 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.143 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:48.144 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:48.403 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:48.403 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.403 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.403 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:48.403 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:48.403 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.403 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:48.403 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:48.403 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.403 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.403 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.403 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.662 00:19:48.662 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.662 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.662 11:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.662 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.662 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.662 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.662 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.662 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.662 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.662 { 00:19:48.662 "cntlid": 127, 00:19:48.662 "qid": 0, 00:19:48.662 "state": "enabled", 00:19:48.662 "thread": "nvmf_tgt_poll_group_000", 00:19:48.662 "listen_address": { 00:19:48.662 "trtype": "TCP", 00:19:48.662 "adrfam": "IPv4", 00:19:48.662 "traddr": "10.0.0.2", 00:19:48.662 "trsvcid": "4420" 00:19:48.662 }, 00:19:48.662 "peer_address": { 00:19:48.662 "trtype": "TCP", 00:19:48.662 "adrfam": "IPv4", 00:19:48.662 "traddr": "10.0.0.1", 00:19:48.662 "trsvcid": "51494" 00:19:48.662 }, 00:19:48.662 "auth": { 00:19:48.662 "state": "completed", 00:19:48.662 "digest": "sha512", 00:19:48.662 "dhgroup": "ffdhe4096" 00:19:48.662 } 00:19:48.662 } 00:19:48.662 ]' 00:19:48.662 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.923 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.923 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.923 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.923 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.923 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.923 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.923 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.183 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:19:49.754 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.754 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:49.754 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.754 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.754 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.754 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.754 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.754 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:49.754 11:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:49.754 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:49.754 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.754 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.754 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:49.754 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:49.754 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.754 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.754 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.754 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.754 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.754 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.754 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.014 00:19:50.014 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.014 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.014 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.274 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.274 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.274 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.274 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.274 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.274 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.274 { 00:19:50.274 "cntlid": 129, 00:19:50.274 "qid": 0, 00:19:50.274 "state": "enabled", 00:19:50.274 "thread": "nvmf_tgt_poll_group_000", 00:19:50.274 "listen_address": { 00:19:50.274 "trtype": "TCP", 00:19:50.274 "adrfam": "IPv4", 00:19:50.274 "traddr": "10.0.0.2", 00:19:50.274 "trsvcid": "4420" 00:19:50.274 }, 00:19:50.274 "peer_address": { 00:19:50.274 "trtype": "TCP", 00:19:50.274 "adrfam": "IPv4", 00:19:50.274 "traddr": "10.0.0.1", 00:19:50.274 "trsvcid": "51516" 00:19:50.274 }, 00:19:50.274 "auth": { 00:19:50.274 "state": "completed", 00:19:50.274 "digest": "sha512", 00:19:50.274 "dhgroup": "ffdhe6144" 00:19:50.274 } 00:19:50.274 } 00:19:50.274 ]' 00:19:50.274 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.274 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.274 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.274 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:50.274 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.536 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.536 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.536 11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.536 
11:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:19:51.134 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.134 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:51.134 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.134 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.134 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.134 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.134 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:51.134 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:51.394 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:51.395 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.395 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:51.395 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:51.395 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:51.395 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.395 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.395 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.395 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.395 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.395 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.395 11:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.654 00:19:51.654 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.654 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.654 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.914 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.914 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.914 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.914 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.914 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.914 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.914 { 00:19:51.914 "cntlid": 131, 00:19:51.914 "qid": 0, 00:19:51.914 "state": "enabled", 00:19:51.914 "thread": "nvmf_tgt_poll_group_000", 00:19:51.914 "listen_address": { 00:19:51.914 "trtype": "TCP", 00:19:51.914 "adrfam": "IPv4", 00:19:51.914 "traddr": "10.0.0.2", 00:19:51.914 "trsvcid": "4420" 00:19:51.914 }, 00:19:51.914 "peer_address": { 00:19:51.914 "trtype": "TCP", 00:19:51.914 "adrfam": "IPv4", 00:19:51.914 "traddr": "10.0.0.1", 00:19:51.914 "trsvcid": "51550" 00:19:51.914 }, 00:19:51.914 "auth": { 00:19:51.914 "state": "completed", 00:19:51.914 "digest": "sha512", 00:19:51.914 "dhgroup": "ffdhe6144" 00:19:51.914 } 00:19:51.914 } 00:19:51.914 ]' 00:19:51.914 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.914 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.914 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.914 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:51.914 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.914 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.914 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.914 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.174 11:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:19:52.744 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.744 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:52.744 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.744 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.744 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.744 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.744 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.744 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:53.004 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:53.004 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.004 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:53.004 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:53.004 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:53.004 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.004 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.004 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.004 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.004 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.004 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.004 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.264 
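At this point the trace has just attached the host controller for the sha512 / ffdhe6144 / key2 combination, one of the digest/dhgroup/key iterations that target/auth.sh cycles through. Condensed into plain commands, and assuming the same RPC socket, addresses, and NQNs shown in the log (paths are shortened, the target-side calls go through the script's rpc_cmd wrapper, and the literal DHHC-1 secrets are stood in for by the keys/ckeys arrays the script loops over), the per-iteration sequence is roughly the following sketch, not the script itself:

# Host side: pin the initiator to the digest/dhgroup under test
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Target side: allow the host NQN to authenticate with key2 (ckey2 for the controller direction)
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Authenticate through the SPDK host stack, verify, then detach
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0                 # auth digest/dhgroup/state inspected from this output
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat the handshake with the kernel initiator via nvme-cli, then clean up
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-secret "${keys[2]}" --dhchap-ctrl-secret "${ckeys[2]}"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

The same pattern repeats below for each remaining key and for the larger dhgroups (ffdhe6144, ffdhe8192), which is why the surrounding log lines differ only in the key index, dhgroup, and secrets.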
00:19:53.264 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.264 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.264 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.523 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.523 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.523 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.524 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.524 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.524 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.524 { 00:19:53.524 "cntlid": 133, 00:19:53.524 "qid": 0, 00:19:53.524 "state": "enabled", 00:19:53.524 "thread": "nvmf_tgt_poll_group_000", 00:19:53.524 "listen_address": { 00:19:53.524 "trtype": "TCP", 00:19:53.524 "adrfam": "IPv4", 00:19:53.524 "traddr": "10.0.0.2", 00:19:53.524 "trsvcid": "4420" 00:19:53.524 }, 00:19:53.524 "peer_address": { 00:19:53.524 "trtype": "TCP", 00:19:53.524 "adrfam": "IPv4", 00:19:53.524 "traddr": "10.0.0.1", 00:19:53.524 "trsvcid": "51574" 00:19:53.524 }, 00:19:53.524 "auth": { 00:19:53.524 "state": "completed", 00:19:53.524 "digest": "sha512", 00:19:53.524 "dhgroup": "ffdhe6144" 00:19:53.524 } 00:19:53.524 } 00:19:53.524 ]' 00:19:53.524 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.524 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.524 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.524 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:53.524 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.524 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.524 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.524 11:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.783 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:19:54.372 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.372 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.373 11:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.943 00:19:54.943 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.943 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.943 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:54.943 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.943 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.943 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.943 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.943 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.943 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.943 { 00:19:54.943 "cntlid": 135, 00:19:54.943 "qid": 0, 00:19:54.943 "state": "enabled", 00:19:54.943 "thread": "nvmf_tgt_poll_group_000", 00:19:54.943 "listen_address": { 00:19:54.943 "trtype": "TCP", 00:19:54.943 "adrfam": "IPv4", 00:19:54.943 "traddr": "10.0.0.2", 00:19:54.943 "trsvcid": "4420" 00:19:54.943 }, 00:19:54.943 "peer_address": { 00:19:54.943 "trtype": "TCP", 00:19:54.944 "adrfam": "IPv4", 00:19:54.944 "traddr": "10.0.0.1", 00:19:54.944 "trsvcid": "51614" 00:19:54.944 }, 00:19:54.944 "auth": { 00:19:54.944 "state": "completed", 00:19:54.944 "digest": "sha512", 00:19:54.944 "dhgroup": "ffdhe6144" 00:19:54.944 } 00:19:54.944 } 00:19:54.944 ]' 00:19:54.944 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.944 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.944 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.204 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:55.204 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.204 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.204 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.204 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.204 11:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:19:55.773 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.773 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:55.773 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.773 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:55.773 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.773 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.773 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.773 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:55.773 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:56.034 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:56.034 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.034 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.034 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:56.034 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:56.034 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.034 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.034 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.034 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.034 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.034 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.034 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.604 00:19:56.604 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.604 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.604 11:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.604 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.604 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
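The nvmf_subsystem_get_qpairs call issued just above is how the script asserts that the DH-HMAC-CHAP exchange actually completed with the parameters under test; the qpairs JSON and the jq comparisons follow in the trace. For this iteration (sha512 with ffdhe8192, key0) the check reduces to roughly the sketch below, again with paths shortened and the host RPC socket assumed to be /var/tmp/host.sock as in the log:

# The host-side controller must have come up under the expected name
[[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# The target reports the negotiated auth parameters per queue pair; inspect the first one
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

Only after these comparisons pass does the script detach the controller and move on to the nvme-cli connect/disconnect leg, which is the order visible in the log entries that follow.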
00:19:56.604 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.604 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.604 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.604 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.604 { 00:19:56.604 "cntlid": 137, 00:19:56.604 "qid": 0, 00:19:56.604 "state": "enabled", 00:19:56.604 "thread": "nvmf_tgt_poll_group_000", 00:19:56.604 "listen_address": { 00:19:56.604 "trtype": "TCP", 00:19:56.604 "adrfam": "IPv4", 00:19:56.604 "traddr": "10.0.0.2", 00:19:56.604 "trsvcid": "4420" 00:19:56.604 }, 00:19:56.604 "peer_address": { 00:19:56.604 "trtype": "TCP", 00:19:56.604 "adrfam": "IPv4", 00:19:56.604 "traddr": "10.0.0.1", 00:19:56.604 "trsvcid": "51620" 00:19:56.604 }, 00:19:56.604 "auth": { 00:19:56.604 "state": "completed", 00:19:56.604 "digest": "sha512", 00:19:56.604 "dhgroup": "ffdhe8192" 00:19:56.604 } 00:19:56.604 } 00:19:56.604 ]' 00:19:56.604 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.864 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.864 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.864 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.864 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.864 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.864 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.864 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.124 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:19:57.694 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.694 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:57.694 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.694 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.694 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.694 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.694 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.694 11:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.694 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:57.694 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.694 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:57.694 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:57.694 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:57.694 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.694 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.694 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.694 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.694 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.694 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.694 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.264 00:19:58.264 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.264 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.264 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.524 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.524 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.524 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.524 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.524 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.524 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.524 { 00:19:58.524 "cntlid": 139, 00:19:58.524 "qid": 0, 00:19:58.524 "state": "enabled", 00:19:58.524 "thread": "nvmf_tgt_poll_group_000", 00:19:58.524 "listen_address": { 00:19:58.524 "trtype": "TCP", 00:19:58.524 "adrfam": "IPv4", 00:19:58.524 "traddr": "10.0.0.2", 00:19:58.524 "trsvcid": "4420" 00:19:58.524 }, 00:19:58.524 "peer_address": { 00:19:58.524 "trtype": "TCP", 00:19:58.524 "adrfam": "IPv4", 00:19:58.524 "traddr": "10.0.0.1", 00:19:58.524 "trsvcid": "41638" 00:19:58.524 }, 00:19:58.524 "auth": { 00:19:58.524 "state": "completed", 00:19:58.524 "digest": "sha512", 00:19:58.525 "dhgroup": "ffdhe8192" 00:19:58.525 } 00:19:58.525 } 00:19:58.525 ]' 00:19:58.525 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.525 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.525 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.525 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.525 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.525 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.525 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.525 11:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.784 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MWNmZTVkMjA0NGNkNzBlNTY3YTczYzM0ZTFkMDhjY2MX3hQ5: --dhchap-ctrl-secret DHHC-1:02:ODQ5ZjliYmQ2MDBkMWJmYjlkZTdkODg1NDliNjhkYzdhMTI0NTcyODdlZDNmNWY0LdJ6lA==: 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.355 11:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.926 00:19:59.926 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.926 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.926 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.187 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.187 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.187 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.187 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.187 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.187 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.187 { 00:20:00.187 "cntlid": 141, 00:20:00.187 "qid": 0, 00:20:00.187 "state": "enabled", 00:20:00.187 "thread": "nvmf_tgt_poll_group_000", 00:20:00.187 "listen_address": 
{ 00:20:00.187 "trtype": "TCP", 00:20:00.187 "adrfam": "IPv4", 00:20:00.187 "traddr": "10.0.0.2", 00:20:00.187 "trsvcid": "4420" 00:20:00.187 }, 00:20:00.187 "peer_address": { 00:20:00.187 "trtype": "TCP", 00:20:00.187 "adrfam": "IPv4", 00:20:00.187 "traddr": "10.0.0.1", 00:20:00.187 "trsvcid": "41660" 00:20:00.187 }, 00:20:00.187 "auth": { 00:20:00.187 "state": "completed", 00:20:00.187 "digest": "sha512", 00:20:00.187 "dhgroup": "ffdhe8192" 00:20:00.187 } 00:20:00.187 } 00:20:00.187 ]' 00:20:00.187 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.187 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.187 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.187 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.187 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.187 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.187 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.187 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.447 11:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:NDJjOGEwYTZlYTMzMGFhY2JlNjg2MzMyNjY0ZGY5MzJmYTQ0NTQ3Y2E4Nzg4MmZl9AEgQA==: --dhchap-ctrl-secret DHHC-1:01:YTFlMWUxZWI3MTI0NTEwODJhNDk2NWFkNmNhM2I3MTkCo0Yr: 00:20:01.018 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.018 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:01.018 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.018 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.018 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.018 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.018 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.018 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.278 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:01.278 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.278 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:01.278 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:01.278 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:01.278 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.278 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:01.278 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.278 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.278 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.278 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.278 11:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.538 00:20:01.538 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.538 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.538 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.798 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.798 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.798 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.798 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.798 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.798 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.798 { 00:20:01.798 "cntlid": 143, 00:20:01.798 "qid": 0, 00:20:01.798 "state": "enabled", 00:20:01.798 "thread": "nvmf_tgt_poll_group_000", 00:20:01.798 "listen_address": { 00:20:01.798 "trtype": "TCP", 00:20:01.798 "adrfam": "IPv4", 00:20:01.798 "traddr": "10.0.0.2", 00:20:01.798 "trsvcid": "4420" 00:20:01.798 }, 00:20:01.798 "peer_address": { 00:20:01.798 "trtype": "TCP", 00:20:01.798 "adrfam": "IPv4", 00:20:01.798 "traddr": "10.0.0.1", 00:20:01.798 "trsvcid": "41674" 00:20:01.798 }, 00:20:01.798 "auth": { 00:20:01.798 "state": "completed", 00:20:01.798 "digest": "sha512", 00:20:01.798 "dhgroup": 
"ffdhe8192" 00:20:01.798 } 00:20:01.798 } 00:20:01.798 ]' 00:20:01.798 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.798 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.798 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.798 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.058 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.058 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.058 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.058 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.058 11:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:20:02.625 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.625 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:02.625 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.625 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.625 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.625 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:02.625 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:02.625 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:02.625 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.625 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.625 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.885 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:02.885 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.885 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:02.885 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:02.885 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:02.885 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.885 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.885 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.885 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.885 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.885 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.885 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.455 00:20:03.455 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.455 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.455 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.455 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.455 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.455 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.455 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.455 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.455 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.455 { 00:20:03.455 "cntlid": 145, 00:20:03.455 "qid": 0, 00:20:03.455 "state": "enabled", 00:20:03.455 "thread": "nvmf_tgt_poll_group_000", 00:20:03.455 "listen_address": { 00:20:03.455 "trtype": "TCP", 00:20:03.455 "adrfam": "IPv4", 00:20:03.455 "traddr": "10.0.0.2", 00:20:03.455 "trsvcid": "4420" 00:20:03.455 }, 00:20:03.455 "peer_address": { 00:20:03.455 "trtype": "TCP", 00:20:03.455 "adrfam": "IPv4", 00:20:03.455 "traddr": "10.0.0.1", 00:20:03.455 "trsvcid": "41698" 00:20:03.455 }, 00:20:03.455 "auth": { 00:20:03.455 
"state": "completed", 00:20:03.455 "digest": "sha512", 00:20:03.455 "dhgroup": "ffdhe8192" 00:20:03.455 } 00:20:03.455 } 00:20:03.455 ]' 00:20:03.455 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.455 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.455 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.715 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:03.715 11:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.715 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.715 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.716 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.975 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:MWRlOTE5OWU5ZmZhZDBkNmJmNmJiNzI0ZWM5YWJlZTU0ODU4M2NlZmJhYjA1YjUxpvK1Zg==: --dhchap-ctrl-secret DHHC-1:03:OGZjZWE0OWJhMGZjOGI3NWEzZDc3MDcyZTNhZDMyZTZmNWIwMWUxMzRlZThkMTlhMTczNjJlNTQ2NWNkZDNlZAhZeCk=: 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:04.544 11:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.544 11:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.806 request: 00:20:04.806 { 00:20:04.806 "name": "nvme0", 00:20:04.806 "trtype": "tcp", 00:20:04.806 "traddr": "10.0.0.2", 00:20:04.806 "adrfam": "ipv4", 00:20:04.806 "trsvcid": "4420", 00:20:04.806 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:04.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:04.806 "prchk_reftag": false, 00:20:04.806 "prchk_guard": false, 00:20:04.806 "hdgst": false, 00:20:04.806 "ddgst": false, 00:20:04.806 "dhchap_key": "key2", 00:20:04.806 "method": "bdev_nvme_attach_controller", 00:20:04.806 "req_id": 1 00:20:04.806 } 00:20:04.806 Got JSON-RPC error response 00:20:04.806 response: 00:20:04.806 { 00:20:04.806 "code": -5, 00:20:04.806 "message": "Input/output error" 00:20:04.806 } 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.806 
11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.806 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.807 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.388 request: 00:20:05.388 { 00:20:05.388 "name": "nvme0", 00:20:05.388 "trtype": "tcp", 00:20:05.388 "traddr": "10.0.0.2", 00:20:05.388 "adrfam": "ipv4", 00:20:05.388 "trsvcid": "4420", 00:20:05.388 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:05.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:05.388 "prchk_reftag": false, 00:20:05.388 "prchk_guard": false, 00:20:05.388 "hdgst": false, 00:20:05.388 "ddgst": false, 00:20:05.388 "dhchap_key": "key1", 00:20:05.388 "dhchap_ctrlr_key": "ckey2", 00:20:05.388 "method": "bdev_nvme_attach_controller", 00:20:05.388 "req_id": 1 00:20:05.388 } 00:20:05.388 Got JSON-RPC error response 00:20:05.388 response: 00:20:05.388 { 00:20:05.388 "code": -5, 00:20:05.388 "message": "Input/output error" 00:20:05.388 } 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.388 11:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.388 11:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.649 request: 00:20:05.649 { 00:20:05.649 "name": "nvme0", 00:20:05.649 "trtype": "tcp", 00:20:05.649 "traddr": "10.0.0.2", 00:20:05.649 "adrfam": "ipv4", 00:20:05.649 "trsvcid": "4420", 00:20:05.649 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:05.649 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:05.649 "prchk_reftag": false, 00:20:05.649 "prchk_guard": false, 00:20:05.649 "hdgst": false, 00:20:05.649 "ddgst": false, 00:20:05.649 "dhchap_key": "key1", 00:20:05.649 "dhchap_ctrlr_key": "ckey1", 00:20:05.649 "method": "bdev_nvme_attach_controller", 00:20:05.649 "req_id": 1 00:20:05.649 } 00:20:05.649 Got JSON-RPC error response 00:20:05.649 response: 00:20:05.649 { 00:20:05.649 "code": -5, 00:20:05.649 "message": "Input/output error" 00:20:05.649 } 00:20:05.649 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:05.649 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.649 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.649 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.649 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:05.649 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.649 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.649 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.649 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1449872 00:20:05.649 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1449872 ']' 00:20:05.649 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1449872 00:20:05.649 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:05.649 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:05.649 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1449872 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1449872' 00:20:05.908 killing process with pid 1449872 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1449872 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1449872 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=1470026 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1470026 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1470026 ']' 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:05.908 11:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.849 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:06.849 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:06.849 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:06.849 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:06.849 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.849 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.849 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:06.849 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1470026 00:20:06.849 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1470026 ']' 00:20:06.849 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.849 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.849 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
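At this point the test has killed the first target application and started a second nvmf_tgt with --wait-for-rpc -L nvmf_auth, then blocks until the new app answers on /var/tmp/spdk.sock. Below is a minimal sketch of that wait loop; it is not the harness' own waitforlisten helper, and it assumes only the rpc.py path and socket shown in this log plus the standard rpc_get_methods RPC as the readiness probe.

    # Sketch: poll the app's RPC socket until it responds. Assumption: rpc_get_methods
    # answers as soon as the target is listening; this is not waitforlisten itself.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    rpc_sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        if "$rpc_py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
            break            # target is up and serving JSON-RPC
        fi
        sleep 0.1
    done

Once the socket answers, the log shows the test re-running connect_authenticate against the freshly started target before moving on to the expected-failure cases that follow.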
00:20:06.849 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.849 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.109 11:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.679 00:20:07.679 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.679 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.679 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.940 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.940 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.940 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.940 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.940 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.940 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.940 { 00:20:07.940 "cntlid": 1, 00:20:07.940 "qid": 0, 00:20:07.940 "state": "enabled", 00:20:07.940 "thread": "nvmf_tgt_poll_group_000", 00:20:07.940 "listen_address": { 00:20:07.940 "trtype": "TCP", 00:20:07.940 "adrfam": "IPv4", 00:20:07.940 "traddr": "10.0.0.2", 00:20:07.940 "trsvcid": "4420" 00:20:07.940 }, 00:20:07.940 "peer_address": { 00:20:07.940 "trtype": "TCP", 00:20:07.940 "adrfam": "IPv4", 00:20:07.940 "traddr": "10.0.0.1", 00:20:07.940 "trsvcid": "43494" 00:20:07.940 }, 00:20:07.940 "auth": { 00:20:07.940 "state": "completed", 00:20:07.940 "digest": "sha512", 00:20:07.940 "dhgroup": "ffdhe8192" 00:20:07.940 } 00:20:07.940 } 00:20:07.940 ]' 00:20:07.940 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.940 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.940 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.940 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:07.940 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.940 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.940 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.940 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.200 11:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTBmZjViNWJjNGUwZDgwY2E0MzExNzI3YjdlZTY5NWFlNDQ2YTg3OTQyYTI2YmZkNjMxMmExZjhlYmM5NjVlMzTaNeM=: 00:20:08.771 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.771 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:08.771 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.771 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.771 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.771 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:20:08.771 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.771 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.771 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.771 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:08.771 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:08.771 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.771 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:08.772 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.772 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:08.772 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.772 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:08.772 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.772 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.772 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.032 request: 00:20:09.032 { 00:20:09.032 "name": "nvme0", 00:20:09.032 "trtype": "tcp", 00:20:09.032 "traddr": "10.0.0.2", 00:20:09.032 "adrfam": "ipv4", 00:20:09.032 "trsvcid": "4420", 00:20:09.032 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:09.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:09.032 "prchk_reftag": false, 00:20:09.032 "prchk_guard": false, 00:20:09.032 "hdgst": false, 00:20:09.032 "ddgst": false, 00:20:09.032 "dhchap_key": "key3", 00:20:09.032 "method": "bdev_nvme_attach_controller", 00:20:09.032 "req_id": 1 00:20:09.032 } 00:20:09.032 Got JSON-RPC error response 00:20:09.032 response: 00:20:09.032 { 00:20:09.032 "code": -5, 00:20:09.032 "message": "Input/output error" 00:20:09.032 } 00:20:09.032 11:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:09.032 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:09.032 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:09.032 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:09.032 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:09.032 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:09.032 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:09.032 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.293 request: 00:20:09.293 { 00:20:09.293 "name": "nvme0", 00:20:09.293 "trtype": "tcp", 00:20:09.293 "traddr": "10.0.0.2", 00:20:09.293 "adrfam": "ipv4", 00:20:09.293 "trsvcid": "4420", 00:20:09.293 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:09.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:09.293 "prchk_reftag": false, 00:20:09.293 "prchk_guard": false, 00:20:09.293 "hdgst": false, 00:20:09.293 "ddgst": false, 00:20:09.293 "dhchap_key": "key3", 00:20:09.293 
"method": "bdev_nvme_attach_controller", 00:20:09.293 "req_id": 1 00:20:09.293 } 00:20:09.293 Got JSON-RPC error response 00:20:09.293 response: 00:20:09.293 { 00:20:09.293 "code": -5, 00:20:09.293 "message": "Input/output error" 00:20:09.293 } 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:09.293 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:09.553 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:09.553 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.553 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.554 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.554 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:09.554 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.554 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.554 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.554 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:09.554 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:09.554 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:09.554 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:09.554 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:09.554 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:09.554 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:09.554 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:09.554 11:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:09.814 request: 00:20:09.814 { 00:20:09.814 "name": "nvme0", 00:20:09.814 "trtype": "tcp", 00:20:09.814 "traddr": "10.0.0.2", 00:20:09.814 "adrfam": "ipv4", 00:20:09.814 "trsvcid": "4420", 00:20:09.814 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:09.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:20:09.814 "prchk_reftag": false, 00:20:09.814 "prchk_guard": false, 00:20:09.814 "hdgst": false, 00:20:09.814 "ddgst": false, 00:20:09.814 "dhchap_key": "key0", 00:20:09.814 "dhchap_ctrlr_key": "key1", 00:20:09.814 "method": "bdev_nvme_attach_controller", 00:20:09.814 "req_id": 1 00:20:09.814 } 00:20:09.814 Got JSON-RPC error response 00:20:09.814 response: 00:20:09.814 { 00:20:09.814 "code": -5, 00:20:09.814 "message": "Input/output error" 00:20:09.814 } 00:20:09.814 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:09.814 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:09.814 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:09.814 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:09.814 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:09.814 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:10.124 00:20:10.124 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:10.124 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 
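The "Input/output error" responses above are the expected outcome: each NOT-wrapped attach uses key material or DH-HMAC-CHAP parameters (digest, dhgroup, host or controller key) that do not match what the target side was configured with, and the wrapper converts the RPC failure into a test pass. A minimal sketch of that expected-failure pattern, using only the rpc.py flags visible in this log; expect_failure is a hypothetical stand-in, not the NOT()/valid_exec_arg machinery from autotest_common.sh:

    expect_failure() {
        # Succeed only if the wrapped command fails (mirrors the NOT-style checks above).
        if "$@"; then
            echo "unexpected success: $*" >&2
            return 1
        fi
        return 0
    }

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    expect_failure "$rpc_py" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1

The attach at target/auth.sh@192 with --dhchap-key key0 alone then succeeds, which is what the subsequent bdev_nvme_get_controllers check ([[ nvme0 == nvme0 ]]) confirms before the controller is detached and the test cleans up.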
00:20:10.124 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.124 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.124 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.124 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.384 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:10.384 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:10.384 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1449943 00:20:10.384 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1449943 ']' 00:20:10.384 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1449943 00:20:10.384 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:10.384 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:10.384 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1449943 00:20:10.384 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:10.384 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:10.384 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1449943' 00:20:10.384 killing process with pid 1449943 00:20:10.384 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1449943 00:20:10.384 11:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1449943 00:20:10.644 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:10.644 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:10.644 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:10.644 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:10.644 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:10.644 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:10.644 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:10.644 rmmod nvme_tcp 00:20:10.904 rmmod nvme_fabrics 00:20:10.904 rmmod nvme_keyring 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 1470026 ']' 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1470026 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1470026 ']' 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1470026 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1470026 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1470026' 00:20:10.904 killing process with pid 1470026 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1470026 00:20:10.904 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1470026 00:20:11.165 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:11.165 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:11.165 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:11.165 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.165 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:11.165 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.165 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.165 11:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.078 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:13.078 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.UUT /tmp/spdk.key-sha256.hL5 /tmp/spdk.key-sha384.tkk /tmp/spdk.key-sha512.z8v /tmp/spdk.key-sha512.oI0 /tmp/spdk.key-sha384.fAO /tmp/spdk.key-sha256.w5U '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:13.078 00:20:13.078 real 2m8.017s 00:20:13.078 user 4m54.849s 00:20:13.078 sys 0m18.377s 00:20:13.078 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:13.078 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.078 ************************************ 00:20:13.078 END TEST nvmf_auth_target 00:20:13.078 ************************************ 00:20:13.078 11:08:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:13.078 11:08:32 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:13.078 11:08:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:13.078 11:08:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:13.078 11:08:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:13.078 ************************************ 00:20:13.078 START TEST nvmf_bdevio_no_huge 00:20:13.078 ************************************ 00:20:13.078 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:13.339 * Looking for test storage... 00:20:13.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:13.339 11:08:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:13.339 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:13.340 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.340 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:13.340 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:13.340 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:13.340 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.340 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.340 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.340 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:13.340 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:13.340 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:13.340 11:08:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.630 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.630 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:18.630 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.630 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:18.630 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.630 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.630 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.630 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:18.630 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.630 11:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:18.630 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.630 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:18.630 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:18.631 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.631 11:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:18.631 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:18.631 Found net devices under 0000:86:00.0: cvl_0_0 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
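The device-discovery loop traced above resolves each detected Intel E810 function (0000:86:00.0 and 0000:86:00.1, device 0x159b bound to the ice driver) to its kernel net device by globbing /sys/bus/pci/devices/<pci>/net, which is how cvl_0_0 and cvl_0_1 end up in net_devs. A minimal stand-alone sketch of the same lookup, using only the sysfs paths and PCI addresses visible in the trace (not the harness's own code):

  #!/usr/bin/env bash
  # Sketch: map the two E810 PCI functions reported above to their netdev names.
  for pci in 0000:86:00.0 0000:86:00.1; do
      # Each PCI network function exposes its netdev name(s) under .../net/
      for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$netdir" ] || continue        # skip functions with no bound netdev
          echo "$pci -> ${netdir##*/}"        # e.g. 0000:86:00.0 -> cvl_0_0
      done
  done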
00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:18.631 Found net devices under 0000:86:00.1: cvl_0_1 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.631 11:08:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.631 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.631 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.631 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:18.631 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:18.893 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:20:18.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:20:18.893 00:20:18.893 --- 10.0.0.2 ping statistics --- 00:20:18.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.893 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:18.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:20:18.893 00:20:18.893 --- 10.0.0.1 ping statistics --- 00:20:18.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.893 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:18.893 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:18.894 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:18.894 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.894 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:18.894 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1474292 00:20:18.894 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1474292 00:20:18.894 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1474292 ']' 00:20:18.894 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.894 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:18.894 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
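The nvmftestinit/nvmf_tcp_init sequence above builds the TCP test bed out of the two cvl ports: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2/24 as the target side, cvl_0_1 stays in the root namespace with 10.0.0.1/24 as the initiator side, an iptables rule admits TCP port 4420 on the initiator interface, and both directions are verified with a single ping before nvmf_tgt is started inside the namespace with -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78. A condensed sketch of that setup, using only the commands that appear in the trace (interface names and addresses as reported above):

  # Start from clean interfaces, then give the target side its own namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # 10.0.0.1 = initiator (root namespace), 10.0.0.2 = target (inside the namespace).
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP traffic on the default port and verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1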
00:20:18.894 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:18.894 11:08:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:18.894 [2024-07-26 11:08:38.246764] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:18.894 [2024-07-26 11:08:38.246808] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:18.894 [2024-07-26 11:08:38.304955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.154 [2024-07-26 11:08:38.390298] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.154 [2024-07-26 11:08:38.390332] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.154 [2024-07-26 11:08:38.390339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.154 [2024-07-26 11:08:38.390346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.154 [2024-07-26 11:08:38.390351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.154 [2024-07-26 11:08:38.390391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:19.154 [2024-07-26 11:08:38.390519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:19.154 [2024-07-26 11:08:38.390626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.154 [2024-07-26 11:08:38.390627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.736 [2024-07-26 11:08:39.113905] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.736 11:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.736 Malloc0 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:19.736 [2024-07-26 11:08:39.158167] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.736 { 00:20:19.736 "params": { 00:20:19.736 "name": "Nvme$subsystem", 00:20:19.736 "trtype": "$TEST_TRANSPORT", 00:20:19.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.736 "adrfam": "ipv4", 00:20:19.736 "trsvcid": "$NVMF_PORT", 00:20:19.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.736 "hdgst": ${hdgst:-false}, 00:20:19.736 "ddgst": ${ddgst:-false} 00:20:19.736 }, 00:20:19.736 "method": "bdev_nvme_attach_controller" 00:20:19.736 } 00:20:19.736 EOF 00:20:19.736 )") 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
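With the target app up, bdevio.sh provisions it entirely over JSON-RPC: a TCP transport (nvmf_create_transport -t tcp -o -u 8192), a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and a listener on 10.0.0.2:4420; the bdevio binary is then launched with --no-huge -s 1024 and fed the generated connection JSON over /dev/fd/62. The rpc_cmd helper seen above presumably wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket (the tls test later sets rpc_py to that script explicitly), so an equivalent hand-driven sequence would look roughly like:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Same RPCs as in the trace above, issued directly against the target's socket.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420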
00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:19.736 11:08:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:19.736 "params": { 00:20:19.736 "name": "Nvme1", 00:20:19.736 "trtype": "tcp", 00:20:19.736 "traddr": "10.0.0.2", 00:20:19.736 "adrfam": "ipv4", 00:20:19.736 "trsvcid": "4420", 00:20:19.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.736 "hdgst": false, 00:20:19.736 "ddgst": false 00:20:19.736 }, 00:20:19.736 "method": "bdev_nvme_attach_controller" 00:20:19.736 }' 00:20:19.736 [2024-07-26 11:08:39.205800] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:19.736 [2024-07-26 11:08:39.205844] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1474540 ] 00:20:19.996 [2024-07-26 11:08:39.262929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:19.996 [2024-07-26 11:08:39.351023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.996 [2024-07-26 11:08:39.351119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.996 [2024-07-26 11:08:39.351122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.257 I/O targets: 00:20:20.257 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:20.257 00:20:20.257 00:20:20.257 CUnit - A unit testing framework for C - Version 2.1-3 00:20:20.257 http://cunit.sourceforge.net/ 00:20:20.257 00:20:20.257 00:20:20.257 Suite: bdevio tests on: Nvme1n1 00:20:20.257 Test: blockdev write read block ...passed 00:20:20.257 Test: blockdev write zeroes read block ...passed 00:20:20.518 Test: blockdev write zeroes read no split ...passed 00:20:20.518 Test: blockdev write zeroes read split ...passed 00:20:20.518 Test: blockdev write zeroes read split partial ...passed 00:20:20.518 Test: blockdev reset ...[2024-07-26 11:08:39.915369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.518 [2024-07-26 11:08:39.915432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1702300 (9): Bad file descriptor 00:20:20.518 [2024-07-26 11:08:39.976914] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
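The controller entry that gen_nvmf_target_json prints while assembling the bdevio --json config (consumed over /dev/fd/62) describes a single NVMe/TCP attach to the subsystem just created, with header and data digests disabled. Stripped of the log timestamps, the entry from the trace reads:

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

The jq/IFS steps in the trace fold this entry into the full app config before bdevio starts, which is why the test log then reports "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)" as its I/O target.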
00:20:20.518 passed 00:20:20.779 Test: blockdev write read 8 blocks ...passed 00:20:20.780 Test: blockdev write read size > 128k ...passed 00:20:20.780 Test: blockdev write read invalid size ...passed 00:20:20.780 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:20.780 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:20.780 Test: blockdev write read max offset ...passed 00:20:20.780 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:20.780 Test: blockdev writev readv 8 blocks ...passed 00:20:20.780 Test: blockdev writev readv 30 x 1block ...passed 00:20:20.780 Test: blockdev writev readv block ...passed 00:20:20.780 Test: blockdev writev readv size > 128k ...passed 00:20:20.780 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:20.780 Test: blockdev comparev and writev ...[2024-07-26 11:08:40.198687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.780 [2024-07-26 11:08:40.198720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:20.780 [2024-07-26 11:08:40.198734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.780 [2024-07-26 11:08:40.198746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.780 [2024-07-26 11:08:40.199245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.780 [2024-07-26 11:08:40.199255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:20.780 [2024-07-26 11:08:40.199267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.780 [2024-07-26 11:08:40.199273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:20.780 [2024-07-26 11:08:40.199761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.780 [2024-07-26 11:08:40.199772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:20.780 [2024-07-26 11:08:40.199784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.780 [2024-07-26 11:08:40.199791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:20.780 [2024-07-26 11:08:40.200281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.780 [2024-07-26 11:08:40.200293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:20.780 [2024-07-26 11:08:40.200304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.780 [2024-07-26 11:08:40.200311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:20.780 passed 00:20:21.041 Test: blockdev nvme passthru rw ...passed 00:20:21.041 Test: blockdev nvme passthru vendor specific ...[2024-07-26 11:08:40.283980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:21.041 [2024-07-26 11:08:40.283996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:21.041 [2024-07-26 11:08:40.284402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:21.041 [2024-07-26 11:08:40.284412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:21.041 [2024-07-26 11:08:40.284814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:21.041 [2024-07-26 11:08:40.284825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:21.041 [2024-07-26 11:08:40.285229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:21.041 [2024-07-26 11:08:40.285239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:21.041 passed 00:20:21.041 Test: blockdev nvme admin passthru ...passed 00:20:21.041 Test: blockdev copy ...passed 00:20:21.041 00:20:21.041 Run Summary: Type Total Ran Passed Failed Inactive 00:20:21.041 suites 1 1 n/a 0 0 00:20:21.041 tests 23 23 23 0 0 00:20:21.041 asserts 152 152 152 0 n/a 00:20:21.041 00:20:21.041 Elapsed time = 1.358 seconds 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:21.302 rmmod nvme_tcp 00:20:21.302 rmmod nvme_fabrics 00:20:21.302 rmmod nvme_keyring 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1474292 ']' 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1474292 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1474292 ']' 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1474292 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1474292 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1474292' 00:20:21.302 killing process with pid 1474292 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1474292 00:20:21.302 11:08:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1474292 00:20:21.563 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:21.563 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:21.563 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:21.563 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.563 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:21.563 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.563 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.563 11:08:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:24.158 00:20:24.158 real 0m10.553s 00:20:24.158 user 0m14.566s 00:20:24.158 sys 0m5.056s 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.158 ************************************ 00:20:24.158 END TEST nvmf_bdevio_no_huge 00:20:24.158 ************************************ 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:24.158 ************************************ 00:20:24.158 START TEST nvmf_tls 00:20:24.158 ************************************ 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:24.158 * Looking for test storage... 00:20:24.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.158 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
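Each test script in this log is driven through the harness's run_test wrapper, which is what produces the START TEST / END TEST banners and the real/user/sys timing block seen at the end of nvmf_bdevio_no_huge above, and which has just launched tls.sh --transport=tcp here. The real wrapper lives in common/autotest_common.sh and does more (xtrace bookkeeping, argument checks); a simplified stand-in that only reproduces the observable banner-and-timing behaviour would be:

  # Simplified stand-in for run_test, shown only to illustrate the banners and timing;
  # not the actual autotest_common.sh implementation.
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"     # e.g. .../spdk/test/nvmf/target/tls.sh --transport=tcp
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }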
00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:24.159 11:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:29.446 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:29.446 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.446 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:29.447 Found net devices under 0000:86:00.0: cvl_0_0 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:29.447 Found net devices under 0000:86:00.1: cvl_0_1 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.447 11:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:29.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:20:29.447 00:20:29.447 --- 10.0.0.2 ping statistics --- 00:20:29.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.447 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:29.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:20:29.447 00:20:29.447 --- 10.0.0.1 ping statistics --- 00:20:29.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.447 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1478243 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1478243 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1478243 ']' 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:29.447 11:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.447 [2024-07-26 11:08:48.679579] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
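For readers following the nvmf_tcp_init trace above: the test network is built from the two ice ports found earlier (cvl_0_0 and cvl_0_1), with the target-side port moved into a private network namespace and both sides verified with a ping before the nvmf target is launched. A condensed recap of the commands the helper issued, with the names and addresses taken straight from the trace (a sketch of what was run, not the full nvmf/common.sh logic):

    # cvl_0_0 = target side (in namespace cvl_0_0_ns_spdk), cvl_0_1 = initiator side (root namespace)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

With both pings answering, the nvmf_tgt application is then started inside the namespace, which is the SPDK startup output that continues below.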
00:20:29.447 [2024-07-26 11:08:48.679622] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.447 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.447 [2024-07-26 11:08:48.737654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.447 [2024-07-26 11:08:48.816029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.447 [2024-07-26 11:08:48.816082] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.447 [2024-07-26 11:08:48.816089] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.447 [2024-07-26 11:08:48.816096] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.447 [2024-07-26 11:08:48.816101] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.447 [2024-07-26 11:08:48.816117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.016 11:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:30.016 11:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:30.016 11:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:30.016 11:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:30.016 11:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.276 11:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.277 11:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:30.277 11:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:30.277 true 00:20:30.277 11:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:30.277 11:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:30.537 11:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:30.537 11:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:30.537 11:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:30.798 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:30.798 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:30.798 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:30.798 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:30.798 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:20:31.057 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.057 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:31.318 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:31.318 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:31.318 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.318 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:31.318 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:31.318 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:31.318 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:31.577 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.577 11:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:31.837 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:31.837 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:31.837 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:31.837 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:31.837 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.PbUxqVO757 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.U7JGLd8LRK 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.PbUxqVO757 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.U7JGLd8LRK 00:20:32.097 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:32.357 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:32.617 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.PbUxqVO757 00:20:32.617 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PbUxqVO757 00:20:32.617 11:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:32.617 [2024-07-26 11:08:52.089320] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.617 11:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:32.877 11:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:33.137 [2024-07-26 11:08:52.426164] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:33.137 [2024-07-26 11:08:52.426340] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.137 11:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:33.137 malloc0 00:20:33.137 11:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:33.397 11:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PbUxqVO757 00:20:33.657 [2024-07-26 11:08:52.955890] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:33.657 11:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.PbUxqVO757 00:20:33.657 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.648 Initializing NVMe Controllers 00:20:43.648 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:43.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:43.648 Initialization complete. Launching workers. 00:20:43.648 ======================================================== 00:20:43.648 Latency(us) 00:20:43.648 Device Information : IOPS MiB/s Average min max 00:20:43.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16412.37 64.11 3899.91 825.63 6099.47 00:20:43.648 ======================================================== 00:20:43.648 Total : 16412.37 64.11 3899.91 825.63 6099.47 00:20:43.648 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PbUxqVO757 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PbUxqVO757' 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1480765 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1480765 /var/tmp/bdevperf.sock 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1480765 ']' 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:43.648 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.648 [2024-07-26 11:09:03.125225] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:43.648 [2024-07-26 11:09:03.125274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480765 ] 00:20:43.908 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.908 [2024-07-26 11:09:03.174858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.908 [2024-07-26 11:09:03.252311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.477 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:44.478 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:44.478 11:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PbUxqVO757 00:20:44.736 [2024-07-26 11:09:04.073918] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.736 [2024-07-26 11:09:04.073985] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:44.736 TLSTESTn1 00:20:44.737 11:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:44.996 Running I/O for 10 seconds... 
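The --psk file handed to bdev_nvme_attach_controller above (/tmp/tmp.PbUxqVO757) holds the string printed earlier by format_interchange_psk. A hedged sketch of how that helper appears to work, based on the format_key/python steps visible in this trace; the CRC32-plus-base64 layout is my reading of nvmf/common.sh, not an authoritative spec:

    format_key() {   # args: prefix, configured PSK string, hash id (1 = SHA-256, 2 = SHA-384); sketch only
      local prefix=$1 key=$2 digest=$3
      python3 -c '
    import base64, sys, zlib
    prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
    raw = key.encode()                            # the hex string is used as literal ASCII bytes
    crc = zlib.crc32(raw).to_bytes(4, "little")   # 4-byte checksum appended before base64 encoding
    print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(raw + crc).decode()))
    ' "$prefix" "$key" "$digest"
    }

    format_interchange_psk() { format_key "NVMeTLSkey-1" "$1" "$2"; }

    format_interchange_psk 00112233445566778899aabbccddeeff 1
    # The trace above shows this exact input formatting to
    # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The resulting string is written to a 0600-mode temp file and the same file path is registered on the target (nvmf_subsystem_add_host --psk) and passed to the initiator, which is what makes the baseline bdevperf run below succeed.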
00:20:55.045 00:20:55.045 Latency(us) 00:20:55.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.045 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:55.045 Verification LBA range: start 0x0 length 0x2000 00:20:55.045 TLSTESTn1 : 10.13 1016.02 3.97 0.00 0.00 125353.80 7151.97 258952.68 00:20:55.045 =================================================================================================================== 00:20:55.045 Total : 1016.02 3.97 0.00 0.00 125353.80 7151.97 258952.68 00:20:55.045 0 00:20:55.045 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:55.045 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1480765 00:20:55.045 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1480765 ']' 00:20:55.045 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1480765 00:20:55.045 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:55.045 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:55.045 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1480765 00:20:55.045 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:55.045 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:55.045 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1480765' 00:20:55.045 killing process with pid 1480765 00:20:55.045 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1480765 00:20:55.045 Received shutdown signal, test time was about 10.000000 seconds 00:20:55.045 00:20:55.045 Latency(us) 00:20:55.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.045 =================================================================================================================== 00:20:55.045 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.045 [2024-07-26 11:09:14.482791] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:55.045 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1480765 00:20:55.305 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.U7JGLd8LRK 00:20:55.305 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:55.305 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.U7JGLd8LRK 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
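Before the first negative case below, it helps to restate what the target already has configured. Condensed from the setup_nvmf_tgt trace earlier in this log (the full rpc.py path is shortened to $RPC for readability; these are the calls already executed, not new steps):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC sock_impl_set_options -i ssl --tls-version 13    # pin the ssl sock implementation to TLS 1.3
    $RPC framework_start_init                             # leave the --wait-for-rpc state
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PbUxqVO757

Only host1 on cnode1 has a PSK registered, which is the property the four NOT-wrapped cases that follow exercise one variation at a time.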
00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.U7JGLd8LRK 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.U7JGLd8LRK' 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1482995 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1482995 /var/tmp/bdevperf.sock 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1482995 ']' 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:55.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:55.306 11:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.306 [2024-07-26 11:09:14.716334] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
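This bdevperf instance (pid 1482995) belongs to the first negative case: target/tls.sh@146 wraps run_bdevperf in NOT, so the attach with the second key (/tmp/tmp.U7JGLd8LRK), which was never registered for host1, is expected to fail. A condensed sketch of what that expectation amounts to (the real NOT helper lives in autotest_common.sh; the attach command is copied from the trace that follows):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.U7JGLd8LRK; then
      echo "TLS handshake unexpectedly succeeded with the wrong PSK" >&2
      exit 1    # success here would fail the test; the JSON-RPC Input/output error below is the expected outcome
    fi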
00:20:55.306 [2024-07-26 11:09:14.716384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482995 ] 00:20:55.306 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.306 [2024-07-26 11:09:14.766250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.566 [2024-07-26 11:09:14.835720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.137 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:56.137 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:56.137 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.U7JGLd8LRK 00:20:56.397 [2024-07-26 11:09:15.678365] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:56.397 [2024-07-26 11:09:15.678439] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:56.397 [2024-07-26 11:09:15.684816] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:56.397 [2024-07-26 11:09:15.685991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b84570 (107): Transport endpoint is not connected 00:20:56.397 [2024-07-26 11:09:15.686985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b84570 (9): Bad file descriptor 00:20:56.397 [2024-07-26 11:09:15.687986] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:56.397 [2024-07-26 11:09:15.687996] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:56.397 [2024-07-26 11:09:15.688006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:56.397 request: 00:20:56.397 { 00:20:56.397 "name": "TLSTEST", 00:20:56.397 "trtype": "tcp", 00:20:56.397 "traddr": "10.0.0.2", 00:20:56.397 "adrfam": "ipv4", 00:20:56.397 "trsvcid": "4420", 00:20:56.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.397 "prchk_reftag": false, 00:20:56.397 "prchk_guard": false, 00:20:56.397 "hdgst": false, 00:20:56.397 "ddgst": false, 00:20:56.397 "psk": "/tmp/tmp.U7JGLd8LRK", 00:20:56.397 "method": "bdev_nvme_attach_controller", 00:20:56.397 "req_id": 1 00:20:56.397 } 00:20:56.397 Got JSON-RPC error response 00:20:56.397 response: 00:20:56.397 { 00:20:56.397 "code": -5, 00:20:56.397 "message": "Input/output error" 00:20:56.397 } 00:20:56.397 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1482995 00:20:56.397 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1482995 ']' 00:20:56.397 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1482995 00:20:56.397 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:56.397 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:56.397 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1482995 00:20:56.397 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:56.397 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:56.397 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1482995' 00:20:56.397 killing process with pid 1482995 00:20:56.397 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1482995 00:20:56.397 Received shutdown signal, test time was about 10.000000 seconds 00:20:56.397 00:20:56.397 Latency(us) 00:20:56.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.397 =================================================================================================================== 00:20:56.397 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:56.397 [2024-07-26 11:09:15.753466] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:56.397 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1482995 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PbUxqVO757 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PbUxqVO757 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PbUxqVO757 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PbUxqVO757' 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1483233 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1483233 /var/tmp/bdevperf.sock 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1483233 ']' 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:56.658 11:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.658 [2024-07-26 11:09:15.959729] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
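The next bdevperf (pid 1483233) runs case two: the correct key file but hostnqn host2, for which the target never registered a PSK, so the server-side lookup fails during the handshake. The identity it searches for is visible in the errors that follow; a small sketch of how that identity string appears to be composed, inferred only from the error text ("NVMe0R01" followed by host NQN and subsystem NQN):

    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    printf 'PSK identity: NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
    # The target only holds an entry for host1, so tcp_sock_get_key and
    # posix_sock_psk_find_session_server_cb report "Could not find PSK for identity"
    # and the connection is torn down before the controller can initialize.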
00:20:56.658 [2024-07-26 11:09:15.959777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483233 ] 00:20:56.658 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.658 [2024-07-26 11:09:16.009893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.658 [2024-07-26 11:09:16.077237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.919 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:56.919 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:56.919 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.PbUxqVO757 00:20:56.919 [2024-07-26 11:09:16.314184] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:56.919 [2024-07-26 11:09:16.314265] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:56.919 [2024-07-26 11:09:16.319039] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:56.919 [2024-07-26 11:09:16.319068] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:56.919 [2024-07-26 11:09:16.319093] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:56.919 [2024-07-26 11:09:16.319746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f4570 (107): Transport endpoint is not connected 00:20:56.919 [2024-07-26 11:09:16.320737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f4570 (9): Bad file descriptor 00:20:56.919 [2024-07-26 11:09:16.321738] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:56.919 [2024-07-26 11:09:16.321748] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:56.919 [2024-07-26 11:09:16.321757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:56.919 request: 00:20:56.919 { 00:20:56.919 "name": "TLSTEST", 00:20:56.919 "trtype": "tcp", 00:20:56.919 "traddr": "10.0.0.2", 00:20:56.919 "adrfam": "ipv4", 00:20:56.919 "trsvcid": "4420", 00:20:56.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.919 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:56.919 "prchk_reftag": false, 00:20:56.919 "prchk_guard": false, 00:20:56.919 "hdgst": false, 00:20:56.919 "ddgst": false, 00:20:56.919 "psk": "/tmp/tmp.PbUxqVO757", 00:20:56.919 "method": "bdev_nvme_attach_controller", 00:20:56.919 "req_id": 1 00:20:56.919 } 00:20:56.919 Got JSON-RPC error response 00:20:56.919 response: 00:20:56.919 { 00:20:56.919 "code": -5, 00:20:56.919 "message": "Input/output error" 00:20:56.919 } 00:20:56.919 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1483233 00:20:56.919 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1483233 ']' 00:20:56.919 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1483233 00:20:56.919 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:56.919 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:56.919 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1483233 00:20:56.919 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:56.919 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:56.919 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1483233' 00:20:56.919 killing process with pid 1483233 00:20:56.919 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1483233 00:20:56.919 Received shutdown signal, test time was about 10.000000 seconds 00:20:56.919 00:20:56.919 Latency(us) 00:20:56.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.919 =================================================================================================================== 00:20:56.919 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:56.919 [2024-07-26 11:09:16.382908] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:56.919 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1483233 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PbUxqVO757 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PbUxqVO757 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PbUxqVO757 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PbUxqVO757' 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1483401 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1483401 /var/tmp/bdevperf.sock 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1483401 ']' 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:57.180 11:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.180 [2024-07-26 11:09:16.608705] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
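Case three (pid 1483401) flips the other half of the identity: host1 connecting to subnqn cnode2, a subsystem that was never created, so again no PSK matches and the attach is expected to fail. One way to double-check what the target actually exposes is the standard nvmf_get_subsystems RPC; this query is not part of tls.sh, it is only a sketch of how to confirm the precondition (output shape may vary by SPDK version):

    # Ask the running target (default RPC socket /var/tmp/spdk.sock) what it has configured:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
    # Only nqn.2016-06.io.spdk:cnode1 should be listed, with nqn.2016-06.io.spdk:host1 as its
    # sole allowed host, which is why this case (cnode2) and the previous one (host2)
    # cannot find a PSK for their identity.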
00:20:57.180 [2024-07-26 11:09:16.608751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483401 ] 00:20:57.180 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.180 [2024-07-26 11:09:16.660482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.441 [2024-07-26 11:09:16.733118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.011 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:58.012 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:58.012 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PbUxqVO757 00:20:58.272 [2024-07-26 11:09:17.566895] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.272 [2024-07-26 11:09:17.566974] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:58.272 [2024-07-26 11:09:17.571773] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:58.272 [2024-07-26 11:09:17.571793] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:58.272 [2024-07-26 11:09:17.571816] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:58.272 [2024-07-26 11:09:17.572484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243e570 (107): Transport endpoint is not connected 00:20:58.272 [2024-07-26 11:09:17.573475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243e570 (9): Bad file descriptor 00:20:58.272 [2024-07-26 11:09:17.574476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:58.272 [2024-07-26 11:09:17.574486] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:58.272 [2024-07-26 11:09:17.574495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:58.272 request: 00:20:58.272 { 00:20:58.272 "name": "TLSTEST", 00:20:58.272 "trtype": "tcp", 00:20:58.272 "traddr": "10.0.0.2", 00:20:58.272 "adrfam": "ipv4", 00:20:58.272 "trsvcid": "4420", 00:20:58.272 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:58.272 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.272 "prchk_reftag": false, 00:20:58.272 "prchk_guard": false, 00:20:58.272 "hdgst": false, 00:20:58.272 "ddgst": false, 00:20:58.272 "psk": "/tmp/tmp.PbUxqVO757", 00:20:58.272 "method": "bdev_nvme_attach_controller", 00:20:58.272 "req_id": 1 00:20:58.272 } 00:20:58.272 Got JSON-RPC error response 00:20:58.272 response: 00:20:58.272 { 00:20:58.272 "code": -5, 00:20:58.272 "message": "Input/output error" 00:20:58.272 } 00:20:58.272 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1483401 00:20:58.272 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1483401 ']' 00:20:58.272 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1483401 00:20:58.272 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:58.272 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:58.272 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1483401 00:20:58.272 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:58.272 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:58.272 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1483401' 00:20:58.272 killing process with pid 1483401 00:20:58.272 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1483401 00:20:58.272 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.272 00:20:58.272 Latency(us) 00:20:58.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.272 =================================================================================================================== 00:20:58.272 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:58.272 [2024-07-26 11:09:17.636939] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:58.272 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1483401 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1483574 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1483574 /var/tmp/bdevperf.sock 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1483574 ']' 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.534 11:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.534 [2024-07-26 11:09:17.859309] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
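The last negative case (pid 1483574) drops --psk entirely: the listener was added with -k, so it appears to require a secure channel, and a plain attach without a key is expected to fail just like the mismatched-key cases. Condensed, the command under test (copied from the trace that follows) with the NOT expectation spelled out:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      && { echo "plain connect to the TLS listener unexpectedly succeeded" >&2; exit 1; }

After this case the first nvmf target (pid 1478243) is shut down and the test moves on to a longer SHA-384 key (digest 2, NVMeTLSkey-1:02: prefix) with a fresh target, as seen at the end of this section.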
00:20:58.534 [2024-07-26 11:09:17.859361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483574 ] 00:20:58.534 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.534 [2024-07-26 11:09:17.912000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.534 [2024-07-26 11:09:17.985711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.476 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.477 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:59.477 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:59.477 [2024-07-26 11:09:18.822783] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:59.477 [2024-07-26 11:09:18.825019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240baf0 (9): Bad file descriptor 00:20:59.477 [2024-07-26 11:09:18.826017] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:59.477 [2024-07-26 11:09:18.826028] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:59.477 [2024-07-26 11:09:18.826037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:59.477 request: 00:20:59.477 { 00:20:59.477 "name": "TLSTEST", 00:20:59.477 "trtype": "tcp", 00:20:59.477 "traddr": "10.0.0.2", 00:20:59.477 "adrfam": "ipv4", 00:20:59.477 "trsvcid": "4420", 00:20:59.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.477 "prchk_reftag": false, 00:20:59.477 "prchk_guard": false, 00:20:59.477 "hdgst": false, 00:20:59.477 "ddgst": false, 00:20:59.477 "method": "bdev_nvme_attach_controller", 00:20:59.477 "req_id": 1 00:20:59.477 } 00:20:59.477 Got JSON-RPC error response 00:20:59.477 response: 00:20:59.477 { 00:20:59.477 "code": -5, 00:20:59.477 "message": "Input/output error" 00:20:59.477 } 00:20:59.477 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1483574 00:20:59.477 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1483574 ']' 00:20:59.477 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1483574 00:20:59.477 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:59.477 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:59.477 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1483574 00:20:59.477 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:59.477 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:59.477 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1483574' 00:20:59.477 killing process with pid 1483574 00:20:59.477 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1483574 00:20:59.477 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.477 00:20:59.477 Latency(us) 00:20:59.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.477 =================================================================================================================== 00:20:59.477 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:59.477 11:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1483574 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1478243 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1478243 ']' 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1478243 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1478243 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1478243' 00:20:59.737 killing process with pid 1478243 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1478243 00:20:59.737 [2024-07-26 11:09:19.109783] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:59.737 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1478243 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.aPzY4adk1k 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.aPzY4adk1k 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:59.997 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:59.998 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.998 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:59.998 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1483925 00:20:59.998 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1483925 00:20:59.998 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1483925 ']' 00:20:59.998 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.998 11:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:59.998 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.998 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:59.998 11:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.998 [2024-07-26 11:09:19.389014] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:59.998 [2024-07-26 11:09:19.389068] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.998 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.998 [2024-07-26 11:09:19.444509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.258 [2024-07-26 11:09:19.522303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.258 [2024-07-26 11:09:19.522343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.258 [2024-07-26 11:09:19.522350] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.258 [2024-07-26 11:09:19.522356] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.258 [2024-07-26 11:09:19.522361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
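Editor's note: the key_long value generated above is a TLS PSK in interchange format (the NVMeTLSkey-1:02: prefix, with 02 selecting the SHA-384 variant, followed by the base64-encoded key material and its checksum); target/tls.sh writes it to a mktemp file and restricts it to mode 0600, which is the permission the later negative tests toggle. A sketch of that file handling, reusing the exact key string printed in this log:

  # Sketch: the key value is copied verbatim from the format_interchange_psk output above.
  KEY_PATH=$(mktemp)   # this log's instance happened to be /tmp/tmp.aPzY4adk1k
  echo -n 'NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:' > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"   # SPDK refuses to load PSK files readable by group/other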
00:21:00.258 [2024-07-26 11:09:19.522378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.830 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:00.830 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:00.830 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:00.830 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:00.830 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.830 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.830 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.aPzY4adk1k 00:21:00.830 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.aPzY4adk1k 00:21:00.830 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:01.090 [2024-07-26 11:09:20.410409] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.090 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:01.351 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:01.351 [2024-07-26 11:09:20.763317] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:01.351 [2024-07-26 11:09:20.763492] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.351 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:01.610 malloc0 00:21:01.610 11:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aPzY4adk1k 00:21:01.870 [2024-07-26 11:09:21.284680] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aPzY4adk1k 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aPzY4adk1k' 00:21:01.870 11:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1484209 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1484209 /var/tmp/bdevperf.sock 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1484209 ']' 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:01.870 11:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.870 [2024-07-26 11:09:21.336923] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:01.870 [2024-07-26 11:09:21.336966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484209 ] 00:21:01.870 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.130 [2024-07-26 11:09:21.387070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.130 [2024-07-26 11:09:21.460459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.700 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:02.701 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:02.701 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aPzY4adk1k 00:21:02.960 [2024-07-26 11:09:22.299066] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.960 [2024-07-26 11:09:22.299139] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:02.960 TLSTESTn1 00:21:02.960 11:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:03.219 Running I/O for 10 seconds... 
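Editor's note: the 10-second verify run whose results follow was only possible because the target side was prepared for TLS first: a TCP transport, a subsystem with one malloc namespace, a listener created with -k (TLS), and the host NQN registered together with the PSK file. A condensed sketch of that RPC sequence and the matching client attach, with paths and addresses assumed from this log:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  KEY=/tmp/tmp.aPzY4adk1k

  # Target side (default /var/tmp/spdk.sock):
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $KEY

  # Client side (bdevperf RPC socket): the same attach as before, now with --psk.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk $KEY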
00:21:13.223 00:21:13.223 Latency(us) 00:21:13.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.223 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:13.223 Verification LBA range: start 0x0 length 0x2000 00:21:13.223 TLSTESTn1 : 10.12 1130.07 4.41 0.00 0.00 112766.95 7180.47 165036.74 00:21:13.223 =================================================================================================================== 00:21:13.223 Total : 1130.07 4.41 0.00 0.00 112766.95 7180.47 165036.74 00:21:13.223 0 00:21:13.223 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.223 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1484209 00:21:13.223 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1484209 ']' 00:21:13.223 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1484209 00:21:13.223 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:13.223 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:13.223 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1484209 00:21:13.223 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:13.223 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:13.223 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1484209' 00:21:13.223 killing process with pid 1484209 00:21:13.223 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1484209 00:21:13.223 Received shutdown signal, test time was about 10.000000 seconds 00:21:13.223 00:21:13.223 Latency(us) 00:21:13.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.223 =================================================================================================================== 00:21:13.223 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:13.223 [2024-07-26 11:09:32.706620] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:13.223 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1484209 00:21:13.483 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.aPzY4adk1k 00:21:13.483 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aPzY4adk1k 00:21:13.483 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:13.483 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aPzY4adk1k 00:21:13.483 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:13.483 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.483 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:13.483 
11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.483 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aPzY4adk1k 00:21:13.483 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:13.483 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:13.483 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:13.483 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aPzY4adk1k' 00:21:13.484 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:13.484 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1486048 00:21:13.484 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:13.484 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:13.484 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1486048 /var/tmp/bdevperf.sock 00:21:13.484 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1486048 ']' 00:21:13.484 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.484 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.484 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.484 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.484 11:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.484 [2024-07-26 11:09:32.945851] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:13.484 [2024-07-26 11:09:32.945902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486048 ] 00:21:13.484 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.745 [2024-07-26 11:09:32.997837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.745 [2024-07-26 11:09:33.070695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.316 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:14.316 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:14.316 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aPzY4adk1k 00:21:14.577 [2024-07-26 11:09:33.901157] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.577 [2024-07-26 11:09:33.901207] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:14.577 [2024-07-26 11:09:33.901215] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.aPzY4adk1k 00:21:14.577 request: 00:21:14.577 { 00:21:14.577 "name": "TLSTEST", 00:21:14.577 "trtype": "tcp", 00:21:14.577 "traddr": "10.0.0.2", 00:21:14.577 "adrfam": "ipv4", 00:21:14.577 "trsvcid": "4420", 00:21:14.577 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.577 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.577 "prchk_reftag": false, 00:21:14.577 "prchk_guard": false, 00:21:14.577 "hdgst": false, 00:21:14.577 "ddgst": false, 00:21:14.577 "psk": "/tmp/tmp.aPzY4adk1k", 00:21:14.577 "method": "bdev_nvme_attach_controller", 00:21:14.577 "req_id": 1 00:21:14.577 } 00:21:14.577 Got JSON-RPC error response 00:21:14.577 response: 00:21:14.577 { 00:21:14.577 "code": -1, 00:21:14.577 "message": "Operation not permitted" 00:21:14.577 } 00:21:14.577 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1486048 00:21:14.577 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1486048 ']' 00:21:14.577 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1486048 00:21:14.577 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:14.577 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:14.577 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1486048 00:21:14.577 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:14.577 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:14.577 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1486048' 00:21:14.577 killing process with pid 1486048 00:21:14.577 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1486048 00:21:14.577 Received shutdown signal, test time was about 10.000000 seconds 00:21:14.577 
00:21:14.577 Latency(us) 00:21:14.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.577 =================================================================================================================== 00:21:14.577 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:14.577 11:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1486048 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1483925 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1483925 ']' 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1483925 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1483925 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1483925' 00:21:14.838 killing process with pid 1483925 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1483925 00:21:14.838 [2024-07-26 11:09:34.179891] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:14.838 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1483925 00:21:15.100 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:15.100 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.100 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:15.100 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.100 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1486315 00:21:15.100 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1486315 00:21:15.100 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:15.100 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1486315 ']' 00:21:15.100 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.100 11:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:15.100 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.100 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:15.100 11:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.100 [2024-07-26 11:09:34.423458] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:15.100 [2024-07-26 11:09:34.423503] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.100 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.100 [2024-07-26 11:09:34.479455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.100 [2024-07-26 11:09:34.557593] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.100 [2024-07-26 11:09:34.557630] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.100 [2024-07-26 11:09:34.557638] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.100 [2024-07-26 11:09:34.557645] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.100 [2024-07-26 11:09:34.557650] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
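Editor's note: the earlier chmod 0666 on the key file is what made the previous bdevperf attach fail with "Incorrect permissions for PSK file" and the -1 Operation not permitted response, and the target that has just started is used to exercise the same check on the other side: nvmf_subsystem_add_host below is expected to reject the world-accessible key with a -32603 Internal error. A sketch of the permission toggle driving both negative cases, with the key path taken from this log:

  KEY=/tmp/tmp.aPzY4adk1k
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  chmod 0666 "$KEY"   # too permissive: SPDK refuses to load the PSK

  # Client-side check (shown above): bdev_nvme_attach_controller fails with
  # "Operation not permitted". Target-side check (shown below): add_host fails too.
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY" \
      || echo "add_host rejected world-accessible key, as expected"

  chmod 0600 "$KEY"   # restore strict permissions for the remaining tests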
00:21:15.100 [2024-07-26 11:09:34.557666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.aPzY4adk1k 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.aPzY4adk1k 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.aPzY4adk1k 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.aPzY4adk1k 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:16.081 [2024-07-26 11:09:35.429835] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.081 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:16.342 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:16.342 [2024-07-26 11:09:35.770717] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:16.342 [2024-07-26 11:09:35.770896] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.342 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:16.602 malloc0 00:21:16.602 11:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:16.863 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aPzY4adk1k 00:21:16.863 [2024-07-26 11:09:36.296503] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:16.863 [2024-07-26 11:09:36.296532] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:16.863 [2024-07-26 11:09:36.296554] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:16.863 request: 00:21:16.863 { 00:21:16.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.863 "host": "nqn.2016-06.io.spdk:host1", 00:21:16.863 "psk": "/tmp/tmp.aPzY4adk1k", 00:21:16.863 "method": "nvmf_subsystem_add_host", 00:21:16.863 "req_id": 1 00:21:16.863 } 00:21:16.863 Got JSON-RPC error response 00:21:16.863 response: 00:21:16.863 { 00:21:16.863 "code": -32603, 00:21:16.863 "message": "Internal error" 00:21:16.863 } 00:21:16.863 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:16.863 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:16.863 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:16.863 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:16.863 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1486315 00:21:16.863 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1486315 ']' 00:21:16.863 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1486315 00:21:16.863 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:16.863 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:16.863 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1486315 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1486315' 00:21:17.123 killing process with pid 1486315 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1486315 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1486315 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.aPzY4adk1k 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1486778 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 1486778 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1486778 ']' 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:17.123 11:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.123 [2024-07-26 11:09:36.612385] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:17.123 [2024-07-26 11:09:36.612431] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.383 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.383 [2024-07-26 11:09:36.669593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.384 [2024-07-26 11:09:36.736289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.384 [2024-07-26 11:09:36.736330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.384 [2024-07-26 11:09:36.736336] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.384 [2024-07-26 11:09:36.736342] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.384 [2024-07-26 11:09:36.736347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
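Editor's note: each nvmfappstart in this log follows the same pattern: nvmf_tgt is launched inside the test's network namespace with a small core mask, its PID recorded, and the caller blocks until the default RPC socket answers before any configuration RPCs are issued. A stand-alone sketch of that pattern, with the namespace name and paths assumed from this log and a polling loop standing in for waitforlisten:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Start the target in the prepared namespace; -e 0xFFFF enables all tracepoint groups.
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  NVMF_PID=$!

  # Stand-in for waitforlisten: poll the default RPC socket until it is ready.
  until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done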
00:21:17.384 [2024-07-26 11:09:36.736366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.954 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:17.954 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:17.954 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.954 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:17.954 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.954 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.954 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.aPzY4adk1k 00:21:17.954 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.aPzY4adk1k 00:21:17.954 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:18.215 [2024-07-26 11:09:37.603532] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.215 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:18.475 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:18.476 [2024-07-26 11:09:37.948426] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.476 [2024-07-26 11:09:37.948612] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.476 11:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:18.736 malloc0 00:21:18.736 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:18.996 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aPzY4adk1k 00:21:18.996 [2024-07-26 11:09:38.469963] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:18.996 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1487043 00:21:18.996 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:18.996 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.996 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1487043 /var/tmp/bdevperf.sock 00:21:18.996 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 1487043 ']' 00:21:18.996 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.996 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:18.996 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.996 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:18.996 11:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.256 [2024-07-26 11:09:38.528522] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:19.256 [2024-07-26 11:09:38.528568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487043 ] 00:21:19.256 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.256 [2024-07-26 11:09:38.578957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.256 [2024-07-26 11:09:38.657524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.826 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.826 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:19.826 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aPzY4adk1k 00:21:20.087 [2024-07-26 11:09:39.464597] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.087 [2024-07-26 11:09:39.464661] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:20.087 TLSTESTn1 00:21:20.087 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:20.347 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:20.347 "subsystems": [ 00:21:20.347 { 00:21:20.347 "subsystem": "keyring", 00:21:20.347 "config": [] 00:21:20.347 }, 00:21:20.347 { 00:21:20.347 "subsystem": "iobuf", 00:21:20.347 "config": [ 00:21:20.347 { 00:21:20.347 "method": "iobuf_set_options", 00:21:20.347 "params": { 00:21:20.347 "small_pool_count": 8192, 00:21:20.347 "large_pool_count": 1024, 00:21:20.347 "small_bufsize": 8192, 00:21:20.347 "large_bufsize": 135168 00:21:20.347 } 00:21:20.347 } 00:21:20.347 ] 00:21:20.347 }, 00:21:20.347 { 00:21:20.347 "subsystem": "sock", 00:21:20.347 "config": [ 00:21:20.347 { 00:21:20.347 "method": "sock_set_default_impl", 00:21:20.347 "params": { 00:21:20.347 "impl_name": "posix" 00:21:20.347 } 00:21:20.347 }, 00:21:20.347 { 00:21:20.347 "method": "sock_impl_set_options", 00:21:20.347 "params": { 00:21:20.347 "impl_name": "ssl", 00:21:20.347 "recv_buf_size": 4096, 00:21:20.347 "send_buf_size": 4096, 
00:21:20.347 "enable_recv_pipe": true, 00:21:20.347 "enable_quickack": false, 00:21:20.347 "enable_placement_id": 0, 00:21:20.347 "enable_zerocopy_send_server": true, 00:21:20.347 "enable_zerocopy_send_client": false, 00:21:20.347 "zerocopy_threshold": 0, 00:21:20.347 "tls_version": 0, 00:21:20.347 "enable_ktls": false 00:21:20.347 } 00:21:20.347 }, 00:21:20.347 { 00:21:20.347 "method": "sock_impl_set_options", 00:21:20.347 "params": { 00:21:20.347 "impl_name": "posix", 00:21:20.347 "recv_buf_size": 2097152, 00:21:20.347 "send_buf_size": 2097152, 00:21:20.347 "enable_recv_pipe": true, 00:21:20.347 "enable_quickack": false, 00:21:20.347 "enable_placement_id": 0, 00:21:20.347 "enable_zerocopy_send_server": true, 00:21:20.347 "enable_zerocopy_send_client": false, 00:21:20.347 "zerocopy_threshold": 0, 00:21:20.347 "tls_version": 0, 00:21:20.347 "enable_ktls": false 00:21:20.347 } 00:21:20.347 } 00:21:20.347 ] 00:21:20.347 }, 00:21:20.347 { 00:21:20.347 "subsystem": "vmd", 00:21:20.347 "config": [] 00:21:20.347 }, 00:21:20.347 { 00:21:20.347 "subsystem": "accel", 00:21:20.347 "config": [ 00:21:20.347 { 00:21:20.347 "method": "accel_set_options", 00:21:20.347 "params": { 00:21:20.347 "small_cache_size": 128, 00:21:20.347 "large_cache_size": 16, 00:21:20.347 "task_count": 2048, 00:21:20.347 "sequence_count": 2048, 00:21:20.347 "buf_count": 2048 00:21:20.347 } 00:21:20.347 } 00:21:20.347 ] 00:21:20.347 }, 00:21:20.347 { 00:21:20.347 "subsystem": "bdev", 00:21:20.347 "config": [ 00:21:20.347 { 00:21:20.347 "method": "bdev_set_options", 00:21:20.347 "params": { 00:21:20.347 "bdev_io_pool_size": 65535, 00:21:20.347 "bdev_io_cache_size": 256, 00:21:20.347 "bdev_auto_examine": true, 00:21:20.347 "iobuf_small_cache_size": 128, 00:21:20.347 "iobuf_large_cache_size": 16 00:21:20.347 } 00:21:20.347 }, 00:21:20.347 { 00:21:20.347 "method": "bdev_raid_set_options", 00:21:20.347 "params": { 00:21:20.347 "process_window_size_kb": 1024, 00:21:20.347 "process_max_bandwidth_mb_sec": 0 00:21:20.347 } 00:21:20.347 }, 00:21:20.347 { 00:21:20.347 "method": "bdev_iscsi_set_options", 00:21:20.347 "params": { 00:21:20.347 "timeout_sec": 30 00:21:20.347 } 00:21:20.347 }, 00:21:20.347 { 00:21:20.347 "method": "bdev_nvme_set_options", 00:21:20.347 "params": { 00:21:20.347 "action_on_timeout": "none", 00:21:20.347 "timeout_us": 0, 00:21:20.347 "timeout_admin_us": 0, 00:21:20.347 "keep_alive_timeout_ms": 10000, 00:21:20.347 "arbitration_burst": 0, 00:21:20.347 "low_priority_weight": 0, 00:21:20.347 "medium_priority_weight": 0, 00:21:20.347 "high_priority_weight": 0, 00:21:20.347 "nvme_adminq_poll_period_us": 10000, 00:21:20.347 "nvme_ioq_poll_period_us": 0, 00:21:20.347 "io_queue_requests": 0, 00:21:20.347 "delay_cmd_submit": true, 00:21:20.347 "transport_retry_count": 4, 00:21:20.347 "bdev_retry_count": 3, 00:21:20.347 "transport_ack_timeout": 0, 00:21:20.347 "ctrlr_loss_timeout_sec": 0, 00:21:20.347 "reconnect_delay_sec": 0, 00:21:20.347 "fast_io_fail_timeout_sec": 0, 00:21:20.347 "disable_auto_failback": false, 00:21:20.347 "generate_uuids": false, 00:21:20.347 "transport_tos": 0, 00:21:20.347 "nvme_error_stat": false, 00:21:20.347 "rdma_srq_size": 0, 00:21:20.347 "io_path_stat": false, 00:21:20.347 "allow_accel_sequence": false, 00:21:20.347 "rdma_max_cq_size": 0, 00:21:20.347 "rdma_cm_event_timeout_ms": 0, 00:21:20.348 "dhchap_digests": [ 00:21:20.348 "sha256", 00:21:20.348 "sha384", 00:21:20.348 "sha512" 00:21:20.348 ], 00:21:20.348 "dhchap_dhgroups": [ 00:21:20.348 "null", 00:21:20.348 "ffdhe2048", 00:21:20.348 
"ffdhe3072", 00:21:20.348 "ffdhe4096", 00:21:20.348 "ffdhe6144", 00:21:20.348 "ffdhe8192" 00:21:20.348 ] 00:21:20.348 } 00:21:20.348 }, 00:21:20.348 { 00:21:20.348 "method": "bdev_nvme_set_hotplug", 00:21:20.348 "params": { 00:21:20.348 "period_us": 100000, 00:21:20.348 "enable": false 00:21:20.348 } 00:21:20.348 }, 00:21:20.348 { 00:21:20.348 "method": "bdev_malloc_create", 00:21:20.348 "params": { 00:21:20.348 "name": "malloc0", 00:21:20.348 "num_blocks": 8192, 00:21:20.348 "block_size": 4096, 00:21:20.348 "physical_block_size": 4096, 00:21:20.348 "uuid": "241d6e0b-00fe-47c1-b2f2-0aaac0621278", 00:21:20.348 "optimal_io_boundary": 0, 00:21:20.348 "md_size": 0, 00:21:20.348 "dif_type": 0, 00:21:20.348 "dif_is_head_of_md": false, 00:21:20.348 "dif_pi_format": 0 00:21:20.348 } 00:21:20.348 }, 00:21:20.348 { 00:21:20.348 "method": "bdev_wait_for_examine" 00:21:20.348 } 00:21:20.348 ] 00:21:20.348 }, 00:21:20.348 { 00:21:20.348 "subsystem": "nbd", 00:21:20.348 "config": [] 00:21:20.348 }, 00:21:20.348 { 00:21:20.348 "subsystem": "scheduler", 00:21:20.348 "config": [ 00:21:20.348 { 00:21:20.348 "method": "framework_set_scheduler", 00:21:20.348 "params": { 00:21:20.348 "name": "static" 00:21:20.348 } 00:21:20.348 } 00:21:20.348 ] 00:21:20.348 }, 00:21:20.348 { 00:21:20.348 "subsystem": "nvmf", 00:21:20.348 "config": [ 00:21:20.348 { 00:21:20.348 "method": "nvmf_set_config", 00:21:20.348 "params": { 00:21:20.348 "discovery_filter": "match_any", 00:21:20.348 "admin_cmd_passthru": { 00:21:20.348 "identify_ctrlr": false 00:21:20.348 } 00:21:20.348 } 00:21:20.348 }, 00:21:20.348 { 00:21:20.348 "method": "nvmf_set_max_subsystems", 00:21:20.348 "params": { 00:21:20.348 "max_subsystems": 1024 00:21:20.348 } 00:21:20.348 }, 00:21:20.348 { 00:21:20.348 "method": "nvmf_set_crdt", 00:21:20.348 "params": { 00:21:20.348 "crdt1": 0, 00:21:20.348 "crdt2": 0, 00:21:20.348 "crdt3": 0 00:21:20.348 } 00:21:20.348 }, 00:21:20.348 { 00:21:20.348 "method": "nvmf_create_transport", 00:21:20.348 "params": { 00:21:20.348 "trtype": "TCP", 00:21:20.348 "max_queue_depth": 128, 00:21:20.348 "max_io_qpairs_per_ctrlr": 127, 00:21:20.348 "in_capsule_data_size": 4096, 00:21:20.348 "max_io_size": 131072, 00:21:20.348 "io_unit_size": 131072, 00:21:20.348 "max_aq_depth": 128, 00:21:20.348 "num_shared_buffers": 511, 00:21:20.348 "buf_cache_size": 4294967295, 00:21:20.348 "dif_insert_or_strip": false, 00:21:20.348 "zcopy": false, 00:21:20.348 "c2h_success": false, 00:21:20.348 "sock_priority": 0, 00:21:20.348 "abort_timeout_sec": 1, 00:21:20.348 "ack_timeout": 0, 00:21:20.348 "data_wr_pool_size": 0 00:21:20.348 } 00:21:20.348 }, 00:21:20.348 { 00:21:20.348 "method": "nvmf_create_subsystem", 00:21:20.348 "params": { 00:21:20.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.348 "allow_any_host": false, 00:21:20.348 "serial_number": "SPDK00000000000001", 00:21:20.348 "model_number": "SPDK bdev Controller", 00:21:20.348 "max_namespaces": 10, 00:21:20.348 "min_cntlid": 1, 00:21:20.348 "max_cntlid": 65519, 00:21:20.348 "ana_reporting": false 00:21:20.348 } 00:21:20.348 }, 00:21:20.348 { 00:21:20.348 "method": "nvmf_subsystem_add_host", 00:21:20.348 "params": { 00:21:20.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.348 "host": "nqn.2016-06.io.spdk:host1", 00:21:20.348 "psk": "/tmp/tmp.aPzY4adk1k" 00:21:20.348 } 00:21:20.348 }, 00:21:20.348 { 00:21:20.348 "method": "nvmf_subsystem_add_ns", 00:21:20.348 "params": { 00:21:20.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.348 "namespace": { 00:21:20.348 "nsid": 1, 00:21:20.348 
"bdev_name": "malloc0", 00:21:20.348 "nguid": "241D6E0B00FE47C1B2F20AAAC0621278", 00:21:20.348 "uuid": "241d6e0b-00fe-47c1-b2f2-0aaac0621278", 00:21:20.348 "no_auto_visible": false 00:21:20.348 } 00:21:20.348 } 00:21:20.348 }, 00:21:20.348 { 00:21:20.348 "method": "nvmf_subsystem_add_listener", 00:21:20.348 "params": { 00:21:20.348 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.348 "listen_address": { 00:21:20.348 "trtype": "TCP", 00:21:20.348 "adrfam": "IPv4", 00:21:20.348 "traddr": "10.0.0.2", 00:21:20.348 "trsvcid": "4420" 00:21:20.348 }, 00:21:20.348 "secure_channel": true 00:21:20.348 } 00:21:20.348 } 00:21:20.348 ] 00:21:20.348 } 00:21:20.348 ] 00:21:20.348 }' 00:21:20.348 11:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:20.608 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:20.608 "subsystems": [ 00:21:20.608 { 00:21:20.608 "subsystem": "keyring", 00:21:20.608 "config": [] 00:21:20.608 }, 00:21:20.608 { 00:21:20.608 "subsystem": "iobuf", 00:21:20.608 "config": [ 00:21:20.608 { 00:21:20.608 "method": "iobuf_set_options", 00:21:20.608 "params": { 00:21:20.608 "small_pool_count": 8192, 00:21:20.608 "large_pool_count": 1024, 00:21:20.608 "small_bufsize": 8192, 00:21:20.608 "large_bufsize": 135168 00:21:20.608 } 00:21:20.608 } 00:21:20.608 ] 00:21:20.608 }, 00:21:20.608 { 00:21:20.608 "subsystem": "sock", 00:21:20.609 "config": [ 00:21:20.609 { 00:21:20.609 "method": "sock_set_default_impl", 00:21:20.609 "params": { 00:21:20.609 "impl_name": "posix" 00:21:20.609 } 00:21:20.609 }, 00:21:20.609 { 00:21:20.609 "method": "sock_impl_set_options", 00:21:20.609 "params": { 00:21:20.609 "impl_name": "ssl", 00:21:20.609 "recv_buf_size": 4096, 00:21:20.609 "send_buf_size": 4096, 00:21:20.609 "enable_recv_pipe": true, 00:21:20.609 "enable_quickack": false, 00:21:20.609 "enable_placement_id": 0, 00:21:20.609 "enable_zerocopy_send_server": true, 00:21:20.609 "enable_zerocopy_send_client": false, 00:21:20.609 "zerocopy_threshold": 0, 00:21:20.609 "tls_version": 0, 00:21:20.609 "enable_ktls": false 00:21:20.609 } 00:21:20.609 }, 00:21:20.609 { 00:21:20.609 "method": "sock_impl_set_options", 00:21:20.609 "params": { 00:21:20.609 "impl_name": "posix", 00:21:20.609 "recv_buf_size": 2097152, 00:21:20.609 "send_buf_size": 2097152, 00:21:20.609 "enable_recv_pipe": true, 00:21:20.609 "enable_quickack": false, 00:21:20.609 "enable_placement_id": 0, 00:21:20.609 "enable_zerocopy_send_server": true, 00:21:20.609 "enable_zerocopy_send_client": false, 00:21:20.609 "zerocopy_threshold": 0, 00:21:20.609 "tls_version": 0, 00:21:20.609 "enable_ktls": false 00:21:20.609 } 00:21:20.609 } 00:21:20.609 ] 00:21:20.609 }, 00:21:20.609 { 00:21:20.609 "subsystem": "vmd", 00:21:20.609 "config": [] 00:21:20.609 }, 00:21:20.609 { 00:21:20.609 "subsystem": "accel", 00:21:20.609 "config": [ 00:21:20.609 { 00:21:20.609 "method": "accel_set_options", 00:21:20.609 "params": { 00:21:20.609 "small_cache_size": 128, 00:21:20.609 "large_cache_size": 16, 00:21:20.609 "task_count": 2048, 00:21:20.609 "sequence_count": 2048, 00:21:20.609 "buf_count": 2048 00:21:20.609 } 00:21:20.609 } 00:21:20.609 ] 00:21:20.609 }, 00:21:20.609 { 00:21:20.609 "subsystem": "bdev", 00:21:20.609 "config": [ 00:21:20.609 { 00:21:20.609 "method": "bdev_set_options", 00:21:20.609 "params": { 00:21:20.609 "bdev_io_pool_size": 65535, 00:21:20.609 "bdev_io_cache_size": 256, 00:21:20.609 
"bdev_auto_examine": true, 00:21:20.609 "iobuf_small_cache_size": 128, 00:21:20.609 "iobuf_large_cache_size": 16 00:21:20.609 } 00:21:20.609 }, 00:21:20.609 { 00:21:20.609 "method": "bdev_raid_set_options", 00:21:20.609 "params": { 00:21:20.609 "process_window_size_kb": 1024, 00:21:20.609 "process_max_bandwidth_mb_sec": 0 00:21:20.609 } 00:21:20.609 }, 00:21:20.609 { 00:21:20.609 "method": "bdev_iscsi_set_options", 00:21:20.609 "params": { 00:21:20.609 "timeout_sec": 30 00:21:20.609 } 00:21:20.609 }, 00:21:20.609 { 00:21:20.609 "method": "bdev_nvme_set_options", 00:21:20.609 "params": { 00:21:20.609 "action_on_timeout": "none", 00:21:20.609 "timeout_us": 0, 00:21:20.609 "timeout_admin_us": 0, 00:21:20.609 "keep_alive_timeout_ms": 10000, 00:21:20.609 "arbitration_burst": 0, 00:21:20.609 "low_priority_weight": 0, 00:21:20.609 "medium_priority_weight": 0, 00:21:20.609 "high_priority_weight": 0, 00:21:20.609 "nvme_adminq_poll_period_us": 10000, 00:21:20.609 "nvme_ioq_poll_period_us": 0, 00:21:20.609 "io_queue_requests": 512, 00:21:20.609 "delay_cmd_submit": true, 00:21:20.609 "transport_retry_count": 4, 00:21:20.609 "bdev_retry_count": 3, 00:21:20.609 "transport_ack_timeout": 0, 00:21:20.609 "ctrlr_loss_timeout_sec": 0, 00:21:20.609 "reconnect_delay_sec": 0, 00:21:20.609 "fast_io_fail_timeout_sec": 0, 00:21:20.609 "disable_auto_failback": false, 00:21:20.609 "generate_uuids": false, 00:21:20.609 "transport_tos": 0, 00:21:20.609 "nvme_error_stat": false, 00:21:20.609 "rdma_srq_size": 0, 00:21:20.609 "io_path_stat": false, 00:21:20.609 "allow_accel_sequence": false, 00:21:20.609 "rdma_max_cq_size": 0, 00:21:20.609 "rdma_cm_event_timeout_ms": 0, 00:21:20.609 "dhchap_digests": [ 00:21:20.609 "sha256", 00:21:20.609 "sha384", 00:21:20.609 "sha512" 00:21:20.609 ], 00:21:20.609 "dhchap_dhgroups": [ 00:21:20.609 "null", 00:21:20.609 "ffdhe2048", 00:21:20.609 "ffdhe3072", 00:21:20.609 "ffdhe4096", 00:21:20.609 "ffdhe6144", 00:21:20.609 "ffdhe8192" 00:21:20.609 ] 00:21:20.609 } 00:21:20.609 }, 00:21:20.609 { 00:21:20.609 "method": "bdev_nvme_attach_controller", 00:21:20.609 "params": { 00:21:20.609 "name": "TLSTEST", 00:21:20.609 "trtype": "TCP", 00:21:20.609 "adrfam": "IPv4", 00:21:20.609 "traddr": "10.0.0.2", 00:21:20.609 "trsvcid": "4420", 00:21:20.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.609 "prchk_reftag": false, 00:21:20.609 "prchk_guard": false, 00:21:20.609 "ctrlr_loss_timeout_sec": 0, 00:21:20.609 "reconnect_delay_sec": 0, 00:21:20.609 "fast_io_fail_timeout_sec": 0, 00:21:20.609 "psk": "/tmp/tmp.aPzY4adk1k", 00:21:20.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.609 "hdgst": false, 00:21:20.609 "ddgst": false 00:21:20.609 } 00:21:20.609 }, 00:21:20.609 { 00:21:20.609 "method": "bdev_nvme_set_hotplug", 00:21:20.609 "params": { 00:21:20.609 "period_us": 100000, 00:21:20.609 "enable": false 00:21:20.609 } 00:21:20.609 }, 00:21:20.609 { 00:21:20.609 "method": "bdev_wait_for_examine" 00:21:20.609 } 00:21:20.609 ] 00:21:20.609 }, 00:21:20.609 { 00:21:20.609 "subsystem": "nbd", 00:21:20.609 "config": [] 00:21:20.609 } 00:21:20.609 ] 00:21:20.609 }' 00:21:20.609 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1487043 00:21:20.609 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1487043 ']' 00:21:20.609 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1487043 00:21:20.609 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:21:20.609 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:20.609 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1487043 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1487043' 00:21:20.870 killing process with pid 1487043 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1487043 00:21:20.870 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.870 00:21:20.870 Latency(us) 00:21:20.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.870 =================================================================================================================== 00:21:20.870 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:20.870 [2024-07-26 11:09:40.113458] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1487043 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1486778 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1486778 ']' 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1486778 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1486778 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1486778' 00:21:20.870 killing process with pid 1486778 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1486778 00:21:20.870 [2024-07-26 11:09:40.342830] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:20.870 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1486778 00:21:21.130 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:21.130 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.130 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:21.130 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:21.130 "subsystems": [ 00:21:21.130 { 00:21:21.130 "subsystem": "keyring", 00:21:21.130 "config": [] 00:21:21.130 }, 00:21:21.130 { 00:21:21.130 
"subsystem": "iobuf", 00:21:21.130 "config": [ 00:21:21.130 { 00:21:21.130 "method": "iobuf_set_options", 00:21:21.130 "params": { 00:21:21.130 "small_pool_count": 8192, 00:21:21.130 "large_pool_count": 1024, 00:21:21.130 "small_bufsize": 8192, 00:21:21.130 "large_bufsize": 135168 00:21:21.130 } 00:21:21.130 } 00:21:21.130 ] 00:21:21.130 }, 00:21:21.130 { 00:21:21.130 "subsystem": "sock", 00:21:21.130 "config": [ 00:21:21.130 { 00:21:21.130 "method": "sock_set_default_impl", 00:21:21.130 "params": { 00:21:21.131 "impl_name": "posix" 00:21:21.131 } 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "method": "sock_impl_set_options", 00:21:21.131 "params": { 00:21:21.131 "impl_name": "ssl", 00:21:21.131 "recv_buf_size": 4096, 00:21:21.131 "send_buf_size": 4096, 00:21:21.131 "enable_recv_pipe": true, 00:21:21.131 "enable_quickack": false, 00:21:21.131 "enable_placement_id": 0, 00:21:21.131 "enable_zerocopy_send_server": true, 00:21:21.131 "enable_zerocopy_send_client": false, 00:21:21.131 "zerocopy_threshold": 0, 00:21:21.131 "tls_version": 0, 00:21:21.131 "enable_ktls": false 00:21:21.131 } 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "method": "sock_impl_set_options", 00:21:21.131 "params": { 00:21:21.131 "impl_name": "posix", 00:21:21.131 "recv_buf_size": 2097152, 00:21:21.131 "send_buf_size": 2097152, 00:21:21.131 "enable_recv_pipe": true, 00:21:21.131 "enable_quickack": false, 00:21:21.131 "enable_placement_id": 0, 00:21:21.131 "enable_zerocopy_send_server": true, 00:21:21.131 "enable_zerocopy_send_client": false, 00:21:21.131 "zerocopy_threshold": 0, 00:21:21.131 "tls_version": 0, 00:21:21.131 "enable_ktls": false 00:21:21.131 } 00:21:21.131 } 00:21:21.131 ] 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "subsystem": "vmd", 00:21:21.131 "config": [] 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "subsystem": "accel", 00:21:21.131 "config": [ 00:21:21.131 { 00:21:21.131 "method": "accel_set_options", 00:21:21.131 "params": { 00:21:21.131 "small_cache_size": 128, 00:21:21.131 "large_cache_size": 16, 00:21:21.131 "task_count": 2048, 00:21:21.131 "sequence_count": 2048, 00:21:21.131 "buf_count": 2048 00:21:21.131 } 00:21:21.131 } 00:21:21.131 ] 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "subsystem": "bdev", 00:21:21.131 "config": [ 00:21:21.131 { 00:21:21.131 "method": "bdev_set_options", 00:21:21.131 "params": { 00:21:21.131 "bdev_io_pool_size": 65535, 00:21:21.131 "bdev_io_cache_size": 256, 00:21:21.131 "bdev_auto_examine": true, 00:21:21.131 "iobuf_small_cache_size": 128, 00:21:21.131 "iobuf_large_cache_size": 16 00:21:21.131 } 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "method": "bdev_raid_set_options", 00:21:21.131 "params": { 00:21:21.131 "process_window_size_kb": 1024, 00:21:21.131 "process_max_bandwidth_mb_sec": 0 00:21:21.131 } 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "method": "bdev_iscsi_set_options", 00:21:21.131 "params": { 00:21:21.131 "timeout_sec": 30 00:21:21.131 } 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "method": "bdev_nvme_set_options", 00:21:21.131 "params": { 00:21:21.131 "action_on_timeout": "none", 00:21:21.131 "timeout_us": 0, 00:21:21.131 "timeout_admin_us": 0, 00:21:21.131 "keep_alive_timeout_ms": 10000, 00:21:21.131 "arbitration_burst": 0, 00:21:21.131 "low_priority_weight": 0, 00:21:21.131 "medium_priority_weight": 0, 00:21:21.131 "high_priority_weight": 0, 00:21:21.131 "nvme_adminq_poll_period_us": 10000, 00:21:21.131 "nvme_ioq_poll_period_us": 0, 00:21:21.131 "io_queue_requests": 0, 00:21:21.131 "delay_cmd_submit": true, 00:21:21.131 "transport_retry_count": 4, 
00:21:21.131 "bdev_retry_count": 3, 00:21:21.131 "transport_ack_timeout": 0, 00:21:21.131 "ctrlr_loss_timeout_sec": 0, 00:21:21.131 "reconnect_delay_sec": 0, 00:21:21.131 "fast_io_fail_timeout_sec": 0, 00:21:21.131 "disable_auto_failback": false, 00:21:21.131 "generate_uuids": false, 00:21:21.131 "transport_tos": 0, 00:21:21.131 "nvme_error_stat": false, 00:21:21.131 "rdma_srq_size": 0, 00:21:21.131 "io_path_stat": false, 00:21:21.131 "allow_accel_sequence": false, 00:21:21.131 "rdma_max_cq_size": 0, 00:21:21.131 "rdma_cm_event_timeout_ms": 0, 00:21:21.131 "dhchap_digests": [ 00:21:21.131 "sha256", 00:21:21.131 "sha384", 00:21:21.131 "sha512" 00:21:21.131 ], 00:21:21.131 "dhchap_dhgroups": [ 00:21:21.131 "null", 00:21:21.131 "ffdhe2048", 00:21:21.131 "ffdhe3072", 00:21:21.131 "ffdhe4096", 00:21:21.131 "ffdhe6144", 00:21:21.131 "ffdhe8192" 00:21:21.131 ] 00:21:21.131 } 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "method": "bdev_nvme_set_hotplug", 00:21:21.131 "params": { 00:21:21.131 "period_us": 100000, 00:21:21.131 "enable": false 00:21:21.131 } 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "method": "bdev_malloc_create", 00:21:21.131 "params": { 00:21:21.131 "name": "malloc0", 00:21:21.131 "num_blocks": 8192, 00:21:21.131 "block_size": 4096, 00:21:21.131 "physical_block_size": 4096, 00:21:21.131 "uuid": "241d6e0b-00fe-47c1-b2f2-0aaac0621278", 00:21:21.131 "optimal_io_boundary": 0, 00:21:21.131 "md_size": 0, 00:21:21.131 "dif_type": 0, 00:21:21.131 "dif_is_head_of_md": false, 00:21:21.131 "dif_pi_format": 0 00:21:21.131 } 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "method": "bdev_wait_for_examine" 00:21:21.131 } 00:21:21.131 ] 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "subsystem": "nbd", 00:21:21.131 "config": [] 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "subsystem": "scheduler", 00:21:21.131 "config": [ 00:21:21.131 { 00:21:21.131 "method": "framework_set_scheduler", 00:21:21.131 "params": { 00:21:21.131 "name": "static" 00:21:21.131 } 00:21:21.131 } 00:21:21.131 ] 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "subsystem": "nvmf", 00:21:21.131 "config": [ 00:21:21.131 { 00:21:21.131 "method": "nvmf_set_config", 00:21:21.131 "params": { 00:21:21.131 "discovery_filter": "match_any", 00:21:21.131 "admin_cmd_passthru": { 00:21:21.131 "identify_ctrlr": false 00:21:21.131 } 00:21:21.131 } 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "method": "nvmf_set_max_subsystems", 00:21:21.131 "params": { 00:21:21.131 "max_subsystems": 1024 00:21:21.131 } 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "method": "nvmf_set_crdt", 00:21:21.131 "params": { 00:21:21.131 "crdt1": 0, 00:21:21.131 "crdt2": 0, 00:21:21.131 "crdt3": 0 00:21:21.131 } 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "method": "nvmf_create_transport", 00:21:21.131 "params": { 00:21:21.131 "trtype": "TCP", 00:21:21.131 "max_queue_depth": 128, 00:21:21.131 "max_io_qpairs_per_ctrlr": 127, 00:21:21.131 "in_capsule_data_size": 4096, 00:21:21.131 "max_io_size": 131072, 00:21:21.131 "io_unit_size": 131072, 00:21:21.131 "max_aq_depth": 128, 00:21:21.131 "num_shared_buffers": 511, 00:21:21.131 "buf_cache_size": 4294967295, 00:21:21.131 "dif_insert_or_strip": false, 00:21:21.131 "zcopy": false, 00:21:21.131 "c2h_success": false, 00:21:21.131 "sock_priority": 0, 00:21:21.131 "abort_timeout_sec": 1, 00:21:21.131 "ack_timeout": 0, 00:21:21.131 "data_wr_pool_size": 0 00:21:21.131 } 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "method": "nvmf_create_subsystem", 00:21:21.131 "params": { 00:21:21.131 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.131 
"allow_any_host": false, 00:21:21.131 "serial_number": "SPDK00000000000001", 00:21:21.131 "model_number": "SPDK bdev Controller", 00:21:21.131 "max_namespaces": 10, 00:21:21.131 "min_cntlid": 1, 00:21:21.131 "max_cntlid": 65519, 00:21:21.131 "ana_reporting": false 00:21:21.131 } 00:21:21.131 }, 00:21:21.131 { 00:21:21.131 "method": "nvmf_subsystem_add_host", 00:21:21.131 "params": { 00:21:21.131 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.132 "host": "nqn.2016-06.io.spdk:host1", 00:21:21.132 "psk": "/tmp/tmp.aPzY4adk1k" 00:21:21.132 } 00:21:21.132 }, 00:21:21.132 { 00:21:21.132 "method": "nvmf_subsystem_add_ns", 00:21:21.132 "params": { 00:21:21.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.132 "namespace": { 00:21:21.132 "nsid": 1, 00:21:21.132 "bdev_name": "malloc0", 00:21:21.132 "nguid": "241D6E0B00FE47C1B2F20AAAC0621278", 00:21:21.132 "uuid": "241d6e0b-00fe-47c1-b2f2-0aaac0621278", 00:21:21.132 "no_auto_visible": false 00:21:21.132 } 00:21:21.132 } 00:21:21.132 }, 00:21:21.132 { 00:21:21.132 "method": "nvmf_subsystem_add_listener", 00:21:21.132 "params": { 00:21:21.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.132 "listen_address": { 00:21:21.132 "trtype": "TCP", 00:21:21.132 "adrfam": "IPv4", 00:21:21.132 "traddr": "10.0.0.2", 00:21:21.132 "trsvcid": "4420" 00:21:21.132 }, 00:21:21.132 "secure_channel": true 00:21:21.132 } 00:21:21.132 } 00:21:21.132 ] 00:21:21.132 } 00:21:21.132 ] 00:21:21.132 }' 00:21:21.132 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.132 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1487509 00:21:21.132 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:21.132 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1487509 00:21:21.132 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1487509 ']' 00:21:21.132 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.132 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:21.132 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.132 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:21.132 11:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.132 [2024-07-26 11:09:40.593915] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:21.132 [2024-07-26 11:09:40.593959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.132 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.392 [2024-07-26 11:09:40.649487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.392 [2024-07-26 11:09:40.727736] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:21.392 [2024-07-26 11:09:40.727771] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.392 [2024-07-26 11:09:40.727779] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.392 [2024-07-26 11:09:40.727785] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.392 [2024-07-26 11:09:40.727790] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:21.392 [2024-07-26 11:09:40.727836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.653 [2024-07-26 11:09:40.929867] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.653 [2024-07-26 11:09:40.951755] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:21.653 [2024-07-26 11:09:40.967803] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.653 [2024-07-26 11:09:40.967971] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.913 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.913 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:21.913 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.913 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:21.913 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.174 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.174 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1487615 00:21:22.174 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1487615 /var/tmp/bdevperf.sock 00:21:22.174 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1487615 ']' 00:21:22.174 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.174 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:22.174 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.174 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
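A condensed sketch of the bdevperf launch pattern visible in this trace: the JSON held in $bdevperfconf is fed to bdevperf through process substitution, so it shows up as a /dev/fd path such as /dev/fd/63 above, and the workload is then driven over the bdevperf RPC socket. Paths are relative to the SPDK tree in this workspace and the backgrounding/wait handling is simplified here; this is not the exact tls.sh code, just the shape of it.

  spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
  # once the RPC socket is listening, kick off the verify workload
  spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests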
00:21:22.174 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:22.174 "subsystems": [ 00:21:22.174 { 00:21:22.174 "subsystem": "keyring", 00:21:22.174 "config": [] 00:21:22.174 }, 00:21:22.174 { 00:21:22.174 "subsystem": "iobuf", 00:21:22.174 "config": [ 00:21:22.174 { 00:21:22.174 "method": "iobuf_set_options", 00:21:22.174 "params": { 00:21:22.174 "small_pool_count": 8192, 00:21:22.174 "large_pool_count": 1024, 00:21:22.174 "small_bufsize": 8192, 00:21:22.174 "large_bufsize": 135168 00:21:22.174 } 00:21:22.174 } 00:21:22.174 ] 00:21:22.174 }, 00:21:22.174 { 00:21:22.174 "subsystem": "sock", 00:21:22.174 "config": [ 00:21:22.174 { 00:21:22.174 "method": "sock_set_default_impl", 00:21:22.174 "params": { 00:21:22.174 "impl_name": "posix" 00:21:22.174 } 00:21:22.174 }, 00:21:22.174 { 00:21:22.174 "method": "sock_impl_set_options", 00:21:22.174 "params": { 00:21:22.174 "impl_name": "ssl", 00:21:22.174 "recv_buf_size": 4096, 00:21:22.174 "send_buf_size": 4096, 00:21:22.174 "enable_recv_pipe": true, 00:21:22.174 "enable_quickack": false, 00:21:22.174 "enable_placement_id": 0, 00:21:22.174 "enable_zerocopy_send_server": true, 00:21:22.174 "enable_zerocopy_send_client": false, 00:21:22.174 "zerocopy_threshold": 0, 00:21:22.174 "tls_version": 0, 00:21:22.174 "enable_ktls": false 00:21:22.174 } 00:21:22.174 }, 00:21:22.174 { 00:21:22.174 "method": "sock_impl_set_options", 00:21:22.175 "params": { 00:21:22.175 "impl_name": "posix", 00:21:22.175 "recv_buf_size": 2097152, 00:21:22.175 "send_buf_size": 2097152, 00:21:22.175 "enable_recv_pipe": true, 00:21:22.175 "enable_quickack": false, 00:21:22.175 "enable_placement_id": 0, 00:21:22.175 "enable_zerocopy_send_server": true, 00:21:22.175 "enable_zerocopy_send_client": false, 00:21:22.175 "zerocopy_threshold": 0, 00:21:22.175 "tls_version": 0, 00:21:22.175 "enable_ktls": false 00:21:22.175 } 00:21:22.175 } 00:21:22.175 ] 00:21:22.175 }, 00:21:22.175 { 00:21:22.175 "subsystem": "vmd", 00:21:22.175 "config": [] 00:21:22.175 }, 00:21:22.175 { 00:21:22.175 "subsystem": "accel", 00:21:22.175 "config": [ 00:21:22.175 { 00:21:22.175 "method": "accel_set_options", 00:21:22.175 "params": { 00:21:22.175 "small_cache_size": 128, 00:21:22.175 "large_cache_size": 16, 00:21:22.175 "task_count": 2048, 00:21:22.175 "sequence_count": 2048, 00:21:22.175 "buf_count": 2048 00:21:22.175 } 00:21:22.175 } 00:21:22.175 ] 00:21:22.175 }, 00:21:22.175 { 00:21:22.175 "subsystem": "bdev", 00:21:22.175 "config": [ 00:21:22.175 { 00:21:22.175 "method": "bdev_set_options", 00:21:22.175 "params": { 00:21:22.175 "bdev_io_pool_size": 65535, 00:21:22.175 "bdev_io_cache_size": 256, 00:21:22.175 "bdev_auto_examine": true, 00:21:22.175 "iobuf_small_cache_size": 128, 00:21:22.175 "iobuf_large_cache_size": 16 00:21:22.175 } 00:21:22.175 }, 00:21:22.175 { 00:21:22.175 "method": "bdev_raid_set_options", 00:21:22.175 "params": { 00:21:22.175 "process_window_size_kb": 1024, 00:21:22.175 "process_max_bandwidth_mb_sec": 0 00:21:22.175 } 00:21:22.175 }, 00:21:22.175 { 00:21:22.175 "method": "bdev_iscsi_set_options", 00:21:22.175 "params": { 00:21:22.175 "timeout_sec": 30 00:21:22.175 } 00:21:22.175 }, 00:21:22.175 { 00:21:22.175 "method": "bdev_nvme_set_options", 00:21:22.175 "params": { 00:21:22.175 "action_on_timeout": "none", 00:21:22.175 "timeout_us": 0, 00:21:22.175 "timeout_admin_us": 0, 00:21:22.175 "keep_alive_timeout_ms": 10000, 00:21:22.175 "arbitration_burst": 0, 00:21:22.175 "low_priority_weight": 0, 00:21:22.175 "medium_priority_weight": 0, 
00:21:22.175 "high_priority_weight": 0, 00:21:22.175 "nvme_adminq_poll_period_us": 10000, 00:21:22.175 "nvme_ioq_poll_period_us": 0, 00:21:22.175 "io_queue_requests": 512, 00:21:22.175 "delay_cmd_submit": true, 00:21:22.175 "transport_retry_count": 4, 00:21:22.175 "bdev_retry_count": 3, 00:21:22.175 "transport_ack_timeout": 0, 00:21:22.175 "ctrlr_loss_timeout_sec": 0, 00:21:22.175 "reconnect_delay_sec": 0, 00:21:22.175 "fast_io_fail_timeout_sec": 0, 00:21:22.175 "disable_auto_failback": false, 00:21:22.175 "generate_uuids": false, 00:21:22.175 "transport_tos": 0, 00:21:22.175 "nvme_error_stat": false, 00:21:22.175 "rdma_srq_size": 0, 00:21:22.175 "io_path_stat": false, 00:21:22.175 "allow_accel_sequence": false, 00:21:22.175 "rdma_max_cq_size": 0, 00:21:22.175 "rdma_cm_event_timeout_ms": 0, 00:21:22.175 "dhchap_digests": [ 00:21:22.175 "sha256", 00:21:22.175 "sha384", 00:21:22.175 "sha512" 00:21:22.175 ], 00:21:22.175 "dhchap_dhgroups": [ 00:21:22.175 "null", 00:21:22.175 "ffdhe2048", 00:21:22.175 "ffdhe3072", 00:21:22.175 "ffdhe4096", 00:21:22.175 "ffdhe6144", 00:21:22.175 "ffdhe8192" 00:21:22.175 ] 00:21:22.175 } 00:21:22.175 }, 00:21:22.175 { 00:21:22.175 "method": "bdev_nvme_attach_controller", 00:21:22.175 "params": { 00:21:22.175 "name": "TLSTEST", 00:21:22.175 "trtype": "TCP", 00:21:22.175 "adrfam": "IPv4", 00:21:22.175 "traddr": "10.0.0.2", 00:21:22.175 "trsvcid": "4420", 00:21:22.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.175 "prchk_reftag": false, 00:21:22.175 "prchk_guard": false, 00:21:22.175 "ctrlr_loss_timeout_sec": 0, 00:21:22.175 "reconnect_delay_sec": 0, 00:21:22.175 "fast_io_fail_timeout_sec": 0, 00:21:22.175 "psk": "/tmp/tmp.aPzY4adk1k", 00:21:22.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.175 "hdgst": false, 00:21:22.175 "ddgst": false 00:21:22.175 } 00:21:22.175 }, 00:21:22.175 { 00:21:22.175 "method": "bdev_nvme_set_hotplug", 00:21:22.175 "params": { 00:21:22.175 "period_us": 100000, 00:21:22.175 "enable": false 00:21:22.175 } 00:21:22.175 }, 00:21:22.175 { 00:21:22.175 "method": "bdev_wait_for_examine" 00:21:22.175 } 00:21:22.175 ] 00:21:22.175 }, 00:21:22.175 { 00:21:22.175 "subsystem": "nbd", 00:21:22.175 "config": [] 00:21:22.175 } 00:21:22.175 ] 00:21:22.175 }' 00:21:22.175 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.175 11:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.175 [2024-07-26 11:09:41.473504] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:22.175 [2024-07-26 11:09:41.473549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487615 ] 00:21:22.175 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.175 [2024-07-26 11:09:41.522271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.175 [2024-07-26 11:09:41.593145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.436 [2024-07-26 11:09:41.736352] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:22.436 [2024-07-26 11:09:41.736428] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:23.006 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.006 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:23.006 11:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:23.006 Running I/O for 10 seconds... 00:21:35.224 00:21:35.224 Latency(us) 00:21:35.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.224 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:35.224 Verification LBA range: start 0x0 length 0x2000 00:21:35.224 TLSTESTn1 : 10.10 1134.49 4.43 0.00 0.00 112399.71 7123.48 158654.11 00:21:35.224 =================================================================================================================== 00:21:35.224 Total : 1134.49 4.43 0.00 0.00 112399.71 7123.48 158654.11 00:21:35.224 0 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1487615 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1487615 ']' 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1487615 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1487615 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1487615' 00:21:35.224 killing process with pid 1487615 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1487615 00:21:35.224 Received shutdown signal, test time was about 10.000000 seconds 00:21:35.224 00:21:35.224 Latency(us) 00:21:35.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.224 
=================================================================================================================== 00:21:35.224 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.224 [2024-07-26 11:09:52.572652] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1487615 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1487509 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1487509 ']' 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1487509 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1487509 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1487509' 00:21:35.224 killing process with pid 1487509 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1487509 00:21:35.224 [2024-07-26 11:09:52.803795] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1487509 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:35.224 11:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1489594 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1489594 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1489594 ']' 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
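The setup_nvmf_tgt /tmp/tmp.aPzY4adk1k call traced in the lines that follow boils down to the RPC sequence below against the freshly started target, condensed here as a sketch (rpc.py is scripts/rpc.py in the SPDK tree; -k on the listener requests the TLS-protected secure channel, which the target logs as experimental).

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aPzY4adk1k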
00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.224 [2024-07-26 11:09:53.047461] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:35.224 [2024-07-26 11:09:53.047507] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.224 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.224 [2024-07-26 11:09:53.103924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.224 [2024-07-26 11:09:53.182241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.224 [2024-07-26 11:09:53.182276] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.224 [2024-07-26 11:09:53.182283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.224 [2024-07-26 11:09:53.182290] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.224 [2024-07-26 11:09:53.182294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.224 [2024-07-26 11:09:53.182311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.aPzY4adk1k 00:21:35.224 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.aPzY4adk1k 00:21:35.225 11:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:35.225 [2024-07-26 11:09:54.041998] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.225 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:35.225 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:35.225 [2024-07-26 11:09:54.390936] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:35.225 [2024-07-26 11:09:54.391136] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.225 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:35.225 malloc0 00:21:35.225 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:35.485 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aPzY4adk1k 00:21:35.485 [2024-07-26 11:09:54.868392] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:35.485 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1489858 00:21:35.485 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:35.485 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:35.485 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1489858 /var/tmp/bdevperf.sock 00:21:35.485 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1489858 ']' 00:21:35.485 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.485 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:35.485 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.485 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:35.485 11:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.485 [2024-07-26 11:09:54.924451] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:35.485 [2024-07-26 11:09:54.924497] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489858 ] 00:21:35.485 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.485 [2024-07-26 11:09:54.977643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.745 [2024-07-26 11:09:55.049695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.315 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:36.315 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:36.315 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aPzY4adk1k 00:21:36.574 11:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:36.574 [2024-07-26 11:09:56.037727] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.857 nvme0n1 00:21:36.857 11:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:36.857 Running I/O for 1 seconds... 00:21:38.236 00:21:38.236 Latency(us) 00:21:38.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.236 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:38.236 Verification LBA range: start 0x0 length 0x2000 00:21:38.236 nvme0n1 : 1.07 908.94 3.55 0.00 0.00 137560.28 6382.64 192390.90 00:21:38.236 =================================================================================================================== 00:21:38.236 Total : 908.94 3.55 0.00 0.00 137560.28 6382.64 192390.90 00:21:38.236 0 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1489858 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1489858 ']' 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1489858 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1489858 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1489858' 00:21:38.236 killing process with pid 1489858 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1489858 00:21:38.236 Received shutdown signal, 
test time was about 1.000000 seconds 00:21:38.236 00:21:38.236 Latency(us) 00:21:38.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.236 =================================================================================================================== 00:21:38.236 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1489858 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1489594 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1489594 ']' 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1489594 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1489594 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1489594' 00:21:38.236 killing process with pid 1489594 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1489594 00:21:38.236 [2024-07-26 11:09:57.600433] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:38.236 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1489594 00:21:38.496 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:38.496 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:38.496 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:38.496 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.496 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1490334 00:21:38.496 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:38.496 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1490334 00:21:38.496 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1490334 ']' 00:21:38.496 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.496 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:38.496 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
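On the initiator side, this bdevperf run and the one that follows use the keyring interface instead of passing the PSK path directly (the earlier runs log that option as deprecated): the PSK file is registered as key0 over the bdevperf RPC socket and then referenced by name when attaching the controller. Condensed from the trace as a sketch:

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aPzY4adk1k
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1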
00:21:38.496 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:38.496 11:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.496 [2024-07-26 11:09:57.837764] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:38.496 [2024-07-26 11:09:57.837810] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.496 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.496 [2024-07-26 11:09:57.893356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.496 [2024-07-26 11:09:57.972337] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.496 [2024-07-26 11:09:57.972371] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.496 [2024-07-26 11:09:57.972378] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.496 [2024-07-26 11:09:57.972385] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.496 [2024-07-26 11:09:57.972390] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.496 [2024-07-26 11:09:57.972406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.518 [2024-07-26 11:09:58.683952] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.518 malloc0 00:21:39.518 [2024-07-26 11:09:58.712260] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.518 [2024-07-26 11:09:58.723375] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1490576 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1490576 /var/tmp/bdevperf.sock 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:39.518 11:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1490576 ']' 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:39.518 11:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.518 [2024-07-26 11:09:58.793823] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:39.519 [2024-07-26 11:09:58.793863] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490576 ] 00:21:39.519 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.519 [2024-07-26 11:09:58.846802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.519 [2024-07-26 11:09:58.919315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.459 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:40.459 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:40.459 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aPzY4adk1k 00:21:40.459 11:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:40.459 [2024-07-26 11:09:59.923515] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.718 nvme0n1 00:21:40.719 11:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:40.719 Running I/O for 1 seconds... 
00:21:42.099 00:21:42.099 Latency(us) 00:21:42.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.099 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:42.099 Verification LBA range: start 0x0 length 0x2000 00:21:42.100 nvme0n1 : 1.07 938.80 3.67 0.00 0.00 133310.07 7237.45 198773.54 00:21:42.100 =================================================================================================================== 00:21:42.100 Total : 938.80 3.67 0.00 0.00 133310.07 7237.45 198773.54 00:21:42.100 0 00:21:42.100 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:42.100 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.100 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.100 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.100 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:42.100 "subsystems": [ 00:21:42.100 { 00:21:42.100 "subsystem": "keyring", 00:21:42.100 "config": [ 00:21:42.100 { 00:21:42.100 "method": "keyring_file_add_key", 00:21:42.100 "params": { 00:21:42.100 "name": "key0", 00:21:42.100 "path": "/tmp/tmp.aPzY4adk1k" 00:21:42.100 } 00:21:42.100 } 00:21:42.100 ] 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "subsystem": "iobuf", 00:21:42.100 "config": [ 00:21:42.100 { 00:21:42.100 "method": "iobuf_set_options", 00:21:42.100 "params": { 00:21:42.100 "small_pool_count": 8192, 00:21:42.100 "large_pool_count": 1024, 00:21:42.100 "small_bufsize": 8192, 00:21:42.100 "large_bufsize": 135168 00:21:42.100 } 00:21:42.100 } 00:21:42.100 ] 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "subsystem": "sock", 00:21:42.100 "config": [ 00:21:42.100 { 00:21:42.100 "method": "sock_set_default_impl", 00:21:42.100 "params": { 00:21:42.100 "impl_name": "posix" 00:21:42.100 } 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "method": "sock_impl_set_options", 00:21:42.100 "params": { 00:21:42.100 "impl_name": "ssl", 00:21:42.100 "recv_buf_size": 4096, 00:21:42.100 "send_buf_size": 4096, 00:21:42.100 "enable_recv_pipe": true, 00:21:42.100 "enable_quickack": false, 00:21:42.100 "enable_placement_id": 0, 00:21:42.100 "enable_zerocopy_send_server": true, 00:21:42.100 "enable_zerocopy_send_client": false, 00:21:42.100 "zerocopy_threshold": 0, 00:21:42.100 "tls_version": 0, 00:21:42.100 "enable_ktls": false 00:21:42.100 } 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "method": "sock_impl_set_options", 00:21:42.100 "params": { 00:21:42.100 "impl_name": "posix", 00:21:42.100 "recv_buf_size": 2097152, 00:21:42.100 "send_buf_size": 2097152, 00:21:42.100 "enable_recv_pipe": true, 00:21:42.100 "enable_quickack": false, 00:21:42.100 "enable_placement_id": 0, 00:21:42.100 "enable_zerocopy_send_server": true, 00:21:42.100 "enable_zerocopy_send_client": false, 00:21:42.100 "zerocopy_threshold": 0, 00:21:42.100 "tls_version": 0, 00:21:42.100 "enable_ktls": false 00:21:42.100 } 00:21:42.100 } 00:21:42.100 ] 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "subsystem": "vmd", 00:21:42.100 "config": [] 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "subsystem": "accel", 00:21:42.100 "config": [ 00:21:42.100 { 00:21:42.100 "method": "accel_set_options", 00:21:42.100 "params": { 00:21:42.100 "small_cache_size": 128, 00:21:42.100 "large_cache_size": 16, 00:21:42.100 "task_count": 2048, 00:21:42.100 "sequence_count": 2048, 00:21:42.100 "buf_count": 
2048 00:21:42.100 } 00:21:42.100 } 00:21:42.100 ] 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "subsystem": "bdev", 00:21:42.100 "config": [ 00:21:42.100 { 00:21:42.100 "method": "bdev_set_options", 00:21:42.100 "params": { 00:21:42.100 "bdev_io_pool_size": 65535, 00:21:42.100 "bdev_io_cache_size": 256, 00:21:42.100 "bdev_auto_examine": true, 00:21:42.100 "iobuf_small_cache_size": 128, 00:21:42.100 "iobuf_large_cache_size": 16 00:21:42.100 } 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "method": "bdev_raid_set_options", 00:21:42.100 "params": { 00:21:42.100 "process_window_size_kb": 1024, 00:21:42.100 "process_max_bandwidth_mb_sec": 0 00:21:42.100 } 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "method": "bdev_iscsi_set_options", 00:21:42.100 "params": { 00:21:42.100 "timeout_sec": 30 00:21:42.100 } 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "method": "bdev_nvme_set_options", 00:21:42.100 "params": { 00:21:42.100 "action_on_timeout": "none", 00:21:42.100 "timeout_us": 0, 00:21:42.100 "timeout_admin_us": 0, 00:21:42.100 "keep_alive_timeout_ms": 10000, 00:21:42.100 "arbitration_burst": 0, 00:21:42.100 "low_priority_weight": 0, 00:21:42.100 "medium_priority_weight": 0, 00:21:42.100 "high_priority_weight": 0, 00:21:42.100 "nvme_adminq_poll_period_us": 10000, 00:21:42.100 "nvme_ioq_poll_period_us": 0, 00:21:42.100 "io_queue_requests": 0, 00:21:42.100 "delay_cmd_submit": true, 00:21:42.100 "transport_retry_count": 4, 00:21:42.100 "bdev_retry_count": 3, 00:21:42.100 "transport_ack_timeout": 0, 00:21:42.100 "ctrlr_loss_timeout_sec": 0, 00:21:42.100 "reconnect_delay_sec": 0, 00:21:42.100 "fast_io_fail_timeout_sec": 0, 00:21:42.100 "disable_auto_failback": false, 00:21:42.100 "generate_uuids": false, 00:21:42.100 "transport_tos": 0, 00:21:42.100 "nvme_error_stat": false, 00:21:42.100 "rdma_srq_size": 0, 00:21:42.100 "io_path_stat": false, 00:21:42.100 "allow_accel_sequence": false, 00:21:42.100 "rdma_max_cq_size": 0, 00:21:42.100 "rdma_cm_event_timeout_ms": 0, 00:21:42.100 "dhchap_digests": [ 00:21:42.100 "sha256", 00:21:42.100 "sha384", 00:21:42.100 "sha512" 00:21:42.100 ], 00:21:42.100 "dhchap_dhgroups": [ 00:21:42.100 "null", 00:21:42.100 "ffdhe2048", 00:21:42.100 "ffdhe3072", 00:21:42.100 "ffdhe4096", 00:21:42.100 "ffdhe6144", 00:21:42.100 "ffdhe8192" 00:21:42.100 ] 00:21:42.100 } 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "method": "bdev_nvme_set_hotplug", 00:21:42.100 "params": { 00:21:42.100 "period_us": 100000, 00:21:42.100 "enable": false 00:21:42.100 } 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "method": "bdev_malloc_create", 00:21:42.100 "params": { 00:21:42.100 "name": "malloc0", 00:21:42.100 "num_blocks": 8192, 00:21:42.100 "block_size": 4096, 00:21:42.100 "physical_block_size": 4096, 00:21:42.100 "uuid": "0f661775-98ff-4522-846b-6e1fdfec9bad", 00:21:42.100 "optimal_io_boundary": 0, 00:21:42.100 "md_size": 0, 00:21:42.100 "dif_type": 0, 00:21:42.100 "dif_is_head_of_md": false, 00:21:42.100 "dif_pi_format": 0 00:21:42.100 } 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "method": "bdev_wait_for_examine" 00:21:42.100 } 00:21:42.100 ] 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "subsystem": "nbd", 00:21:42.100 "config": [] 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "subsystem": "scheduler", 00:21:42.100 "config": [ 00:21:42.100 { 00:21:42.100 "method": "framework_set_scheduler", 00:21:42.100 "params": { 00:21:42.100 "name": "static" 00:21:42.100 } 00:21:42.100 } 00:21:42.100 ] 00:21:42.100 }, 00:21:42.100 { 00:21:42.100 "subsystem": "nvmf", 00:21:42.100 "config": [ 00:21:42.100 { 00:21:42.100 
"method": "nvmf_set_config", 00:21:42.100 "params": { 00:21:42.100 "discovery_filter": "match_any", 00:21:42.100 "admin_cmd_passthru": { 00:21:42.100 "identify_ctrlr": false 00:21:42.100 } 00:21:42.100 } 00:21:42.100 }, 00:21:42.101 { 00:21:42.101 "method": "nvmf_set_max_subsystems", 00:21:42.101 "params": { 00:21:42.101 "max_subsystems": 1024 00:21:42.101 } 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "method": "nvmf_set_crdt", 00:21:42.101 "params": { 00:21:42.101 "crdt1": 0, 00:21:42.101 "crdt2": 0, 00:21:42.101 "crdt3": 0 00:21:42.101 } 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "method": "nvmf_create_transport", 00:21:42.101 "params": { 00:21:42.101 "trtype": "TCP", 00:21:42.101 "max_queue_depth": 128, 00:21:42.101 "max_io_qpairs_per_ctrlr": 127, 00:21:42.101 "in_capsule_data_size": 4096, 00:21:42.101 "max_io_size": 131072, 00:21:42.101 "io_unit_size": 131072, 00:21:42.101 "max_aq_depth": 128, 00:21:42.101 "num_shared_buffers": 511, 00:21:42.101 "buf_cache_size": 4294967295, 00:21:42.101 "dif_insert_or_strip": false, 00:21:42.101 "zcopy": false, 00:21:42.101 "c2h_success": false, 00:21:42.101 "sock_priority": 0, 00:21:42.101 "abort_timeout_sec": 1, 00:21:42.101 "ack_timeout": 0, 00:21:42.101 "data_wr_pool_size": 0 00:21:42.101 } 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "method": "nvmf_create_subsystem", 00:21:42.101 "params": { 00:21:42.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.101 "allow_any_host": false, 00:21:42.101 "serial_number": "00000000000000000000", 00:21:42.101 "model_number": "SPDK bdev Controller", 00:21:42.101 "max_namespaces": 32, 00:21:42.101 "min_cntlid": 1, 00:21:42.101 "max_cntlid": 65519, 00:21:42.101 "ana_reporting": false 00:21:42.101 } 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "method": "nvmf_subsystem_add_host", 00:21:42.101 "params": { 00:21:42.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.101 "host": "nqn.2016-06.io.spdk:host1", 00:21:42.101 "psk": "key0" 00:21:42.101 } 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "method": "nvmf_subsystem_add_ns", 00:21:42.101 "params": { 00:21:42.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.101 "namespace": { 00:21:42.101 "nsid": 1, 00:21:42.101 "bdev_name": "malloc0", 00:21:42.101 "nguid": "0F66177598FF4522846B6E1FDFEC9BAD", 00:21:42.101 "uuid": "0f661775-98ff-4522-846b-6e1fdfec9bad", 00:21:42.101 "no_auto_visible": false 00:21:42.101 } 00:21:42.101 } 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "method": "nvmf_subsystem_add_listener", 00:21:42.101 "params": { 00:21:42.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.101 "listen_address": { 00:21:42.101 "trtype": "TCP", 00:21:42.101 "adrfam": "IPv4", 00:21:42.101 "traddr": "10.0.0.2", 00:21:42.101 "trsvcid": "4420" 00:21:42.101 }, 00:21:42.101 "secure_channel": false, 00:21:42.101 "sock_impl": "ssl" 00:21:42.101 } 00:21:42.101 } 00:21:42.101 ] 00:21:42.101 } 00:21:42.101 ] 00:21:42.101 }' 00:21:42.101 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:42.101 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:42.101 "subsystems": [ 00:21:42.101 { 00:21:42.101 "subsystem": "keyring", 00:21:42.101 "config": [ 00:21:42.101 { 00:21:42.101 "method": "keyring_file_add_key", 00:21:42.101 "params": { 00:21:42.101 "name": "key0", 00:21:42.101 "path": "/tmp/tmp.aPzY4adk1k" 00:21:42.101 } 00:21:42.101 } 00:21:42.101 ] 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "subsystem": "iobuf", 00:21:42.101 
"config": [ 00:21:42.101 { 00:21:42.101 "method": "iobuf_set_options", 00:21:42.101 "params": { 00:21:42.101 "small_pool_count": 8192, 00:21:42.101 "large_pool_count": 1024, 00:21:42.101 "small_bufsize": 8192, 00:21:42.101 "large_bufsize": 135168 00:21:42.101 } 00:21:42.101 } 00:21:42.101 ] 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "subsystem": "sock", 00:21:42.101 "config": [ 00:21:42.101 { 00:21:42.101 "method": "sock_set_default_impl", 00:21:42.101 "params": { 00:21:42.101 "impl_name": "posix" 00:21:42.101 } 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "method": "sock_impl_set_options", 00:21:42.101 "params": { 00:21:42.101 "impl_name": "ssl", 00:21:42.101 "recv_buf_size": 4096, 00:21:42.101 "send_buf_size": 4096, 00:21:42.101 "enable_recv_pipe": true, 00:21:42.101 "enable_quickack": false, 00:21:42.101 "enable_placement_id": 0, 00:21:42.101 "enable_zerocopy_send_server": true, 00:21:42.101 "enable_zerocopy_send_client": false, 00:21:42.101 "zerocopy_threshold": 0, 00:21:42.101 "tls_version": 0, 00:21:42.101 "enable_ktls": false 00:21:42.101 } 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "method": "sock_impl_set_options", 00:21:42.101 "params": { 00:21:42.101 "impl_name": "posix", 00:21:42.101 "recv_buf_size": 2097152, 00:21:42.101 "send_buf_size": 2097152, 00:21:42.101 "enable_recv_pipe": true, 00:21:42.101 "enable_quickack": false, 00:21:42.101 "enable_placement_id": 0, 00:21:42.101 "enable_zerocopy_send_server": true, 00:21:42.101 "enable_zerocopy_send_client": false, 00:21:42.101 "zerocopy_threshold": 0, 00:21:42.101 "tls_version": 0, 00:21:42.101 "enable_ktls": false 00:21:42.101 } 00:21:42.101 } 00:21:42.101 ] 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "subsystem": "vmd", 00:21:42.101 "config": [] 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "subsystem": "accel", 00:21:42.101 "config": [ 00:21:42.101 { 00:21:42.101 "method": "accel_set_options", 00:21:42.101 "params": { 00:21:42.101 "small_cache_size": 128, 00:21:42.101 "large_cache_size": 16, 00:21:42.101 "task_count": 2048, 00:21:42.101 "sequence_count": 2048, 00:21:42.101 "buf_count": 2048 00:21:42.101 } 00:21:42.101 } 00:21:42.101 ] 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "subsystem": "bdev", 00:21:42.101 "config": [ 00:21:42.101 { 00:21:42.101 "method": "bdev_set_options", 00:21:42.101 "params": { 00:21:42.101 "bdev_io_pool_size": 65535, 00:21:42.101 "bdev_io_cache_size": 256, 00:21:42.101 "bdev_auto_examine": true, 00:21:42.101 "iobuf_small_cache_size": 128, 00:21:42.101 "iobuf_large_cache_size": 16 00:21:42.101 } 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "method": "bdev_raid_set_options", 00:21:42.101 "params": { 00:21:42.101 "process_window_size_kb": 1024, 00:21:42.101 "process_max_bandwidth_mb_sec": 0 00:21:42.101 } 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "method": "bdev_iscsi_set_options", 00:21:42.101 "params": { 00:21:42.101 "timeout_sec": 30 00:21:42.101 } 00:21:42.101 }, 00:21:42.101 { 00:21:42.101 "method": "bdev_nvme_set_options", 00:21:42.101 "params": { 00:21:42.101 "action_on_timeout": "none", 00:21:42.101 "timeout_us": 0, 00:21:42.101 "timeout_admin_us": 0, 00:21:42.101 "keep_alive_timeout_ms": 10000, 00:21:42.101 "arbitration_burst": 0, 00:21:42.101 "low_priority_weight": 0, 00:21:42.101 "medium_priority_weight": 0, 00:21:42.101 "high_priority_weight": 0, 00:21:42.101 "nvme_adminq_poll_period_us": 10000, 00:21:42.101 "nvme_ioq_poll_period_us": 0, 00:21:42.101 "io_queue_requests": 512, 00:21:42.101 "delay_cmd_submit": true, 00:21:42.101 "transport_retry_count": 4, 00:21:42.101 "bdev_retry_count": 3, 
00:21:42.101 "transport_ack_timeout": 0, 00:21:42.101 "ctrlr_loss_timeout_sec": 0, 00:21:42.101 "reconnect_delay_sec": 0, 00:21:42.101 "fast_io_fail_timeout_sec": 0, 00:21:42.101 "disable_auto_failback": false, 00:21:42.101 "generate_uuids": false, 00:21:42.101 "transport_tos": 0, 00:21:42.101 "nvme_error_stat": false, 00:21:42.101 "rdma_srq_size": 0, 00:21:42.101 "io_path_stat": false, 00:21:42.101 "allow_accel_sequence": false, 00:21:42.101 "rdma_max_cq_size": 0, 00:21:42.101 "rdma_cm_event_timeout_ms": 0, 00:21:42.102 "dhchap_digests": [ 00:21:42.102 "sha256", 00:21:42.102 "sha384", 00:21:42.102 "sha512" 00:21:42.102 ], 00:21:42.102 "dhchap_dhgroups": [ 00:21:42.102 "null", 00:21:42.102 "ffdhe2048", 00:21:42.102 "ffdhe3072", 00:21:42.102 "ffdhe4096", 00:21:42.102 "ffdhe6144", 00:21:42.102 "ffdhe8192" 00:21:42.102 ] 00:21:42.102 } 00:21:42.102 }, 00:21:42.102 { 00:21:42.102 "method": "bdev_nvme_attach_controller", 00:21:42.102 "params": { 00:21:42.102 "name": "nvme0", 00:21:42.102 "trtype": "TCP", 00:21:42.102 "adrfam": "IPv4", 00:21:42.102 "traddr": "10.0.0.2", 00:21:42.102 "trsvcid": "4420", 00:21:42.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.102 "prchk_reftag": false, 00:21:42.102 "prchk_guard": false, 00:21:42.102 "ctrlr_loss_timeout_sec": 0, 00:21:42.102 "reconnect_delay_sec": 0, 00:21:42.102 "fast_io_fail_timeout_sec": 0, 00:21:42.102 "psk": "key0", 00:21:42.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:42.102 "hdgst": false, 00:21:42.102 "ddgst": false 00:21:42.102 } 00:21:42.102 }, 00:21:42.102 { 00:21:42.102 "method": "bdev_nvme_set_hotplug", 00:21:42.102 "params": { 00:21:42.102 "period_us": 100000, 00:21:42.102 "enable": false 00:21:42.102 } 00:21:42.102 }, 00:21:42.102 { 00:21:42.102 "method": "bdev_enable_histogram", 00:21:42.102 "params": { 00:21:42.102 "name": "nvme0n1", 00:21:42.102 "enable": true 00:21:42.102 } 00:21:42.102 }, 00:21:42.102 { 00:21:42.102 "method": "bdev_wait_for_examine" 00:21:42.102 } 00:21:42.102 ] 00:21:42.102 }, 00:21:42.102 { 00:21:42.102 "subsystem": "nbd", 00:21:42.102 "config": [] 00:21:42.102 } 00:21:42.102 ] 00:21:42.102 }' 00:21:42.102 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1490576 00:21:42.102 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1490576 ']' 00:21:42.102 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1490576 00:21:42.102 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:42.102 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.102 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1490576 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1490576' 00:21:42.362 killing process with pid 1490576 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1490576 00:21:42.362 Received shutdown signal, test time was about 1.000000 seconds 00:21:42.362 00:21:42.362 Latency(us) 00:21:42.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.362 
=================================================================================================================== 00:21:42.362 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1490576 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1490334 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1490334 ']' 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1490334 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1490334 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1490334' 00:21:42.362 killing process with pid 1490334 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1490334 00:21:42.362 11:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1490334 00:21:42.623 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:42.623 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.623 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:42.623 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:42.623 "subsystems": [ 00:21:42.623 { 00:21:42.623 "subsystem": "keyring", 00:21:42.623 "config": [ 00:21:42.623 { 00:21:42.623 "method": "keyring_file_add_key", 00:21:42.623 "params": { 00:21:42.623 "name": "key0", 00:21:42.623 "path": "/tmp/tmp.aPzY4adk1k" 00:21:42.623 } 00:21:42.623 } 00:21:42.623 ] 00:21:42.623 }, 00:21:42.623 { 00:21:42.623 "subsystem": "iobuf", 00:21:42.623 "config": [ 00:21:42.623 { 00:21:42.623 "method": "iobuf_set_options", 00:21:42.623 "params": { 00:21:42.623 "small_pool_count": 8192, 00:21:42.623 "large_pool_count": 1024, 00:21:42.623 "small_bufsize": 8192, 00:21:42.623 "large_bufsize": 135168 00:21:42.623 } 00:21:42.623 } 00:21:42.623 ] 00:21:42.623 }, 00:21:42.623 { 00:21:42.623 "subsystem": "sock", 00:21:42.623 "config": [ 00:21:42.623 { 00:21:42.623 "method": "sock_set_default_impl", 00:21:42.623 "params": { 00:21:42.623 "impl_name": "posix" 00:21:42.623 } 00:21:42.623 }, 00:21:42.623 { 00:21:42.623 "method": "sock_impl_set_options", 00:21:42.623 "params": { 00:21:42.623 "impl_name": "ssl", 00:21:42.623 "recv_buf_size": 4096, 00:21:42.623 "send_buf_size": 4096, 00:21:42.623 "enable_recv_pipe": true, 00:21:42.623 "enable_quickack": false, 00:21:42.623 "enable_placement_id": 0, 00:21:42.623 "enable_zerocopy_send_server": true, 00:21:42.623 "enable_zerocopy_send_client": false, 00:21:42.623 "zerocopy_threshold": 0, 00:21:42.623 "tls_version": 0, 00:21:42.623 "enable_ktls": false 00:21:42.623 } 00:21:42.623 }, 00:21:42.623 { 00:21:42.623 "method": 
"sock_impl_set_options", 00:21:42.623 "params": { 00:21:42.623 "impl_name": "posix", 00:21:42.623 "recv_buf_size": 2097152, 00:21:42.623 "send_buf_size": 2097152, 00:21:42.623 "enable_recv_pipe": true, 00:21:42.623 "enable_quickack": false, 00:21:42.623 "enable_placement_id": 0, 00:21:42.623 "enable_zerocopy_send_server": true, 00:21:42.623 "enable_zerocopy_send_client": false, 00:21:42.623 "zerocopy_threshold": 0, 00:21:42.623 "tls_version": 0, 00:21:42.623 "enable_ktls": false 00:21:42.623 } 00:21:42.623 } 00:21:42.623 ] 00:21:42.623 }, 00:21:42.623 { 00:21:42.623 "subsystem": "vmd", 00:21:42.623 "config": [] 00:21:42.623 }, 00:21:42.623 { 00:21:42.623 "subsystem": "accel", 00:21:42.623 "config": [ 00:21:42.623 { 00:21:42.623 "method": "accel_set_options", 00:21:42.623 "params": { 00:21:42.623 "small_cache_size": 128, 00:21:42.623 "large_cache_size": 16, 00:21:42.623 "task_count": 2048, 00:21:42.623 "sequence_count": 2048, 00:21:42.623 "buf_count": 2048 00:21:42.623 } 00:21:42.623 } 00:21:42.623 ] 00:21:42.623 }, 00:21:42.623 { 00:21:42.623 "subsystem": "bdev", 00:21:42.623 "config": [ 00:21:42.623 { 00:21:42.623 "method": "bdev_set_options", 00:21:42.623 "params": { 00:21:42.623 "bdev_io_pool_size": 65535, 00:21:42.623 "bdev_io_cache_size": 256, 00:21:42.623 "bdev_auto_examine": true, 00:21:42.623 "iobuf_small_cache_size": 128, 00:21:42.623 "iobuf_large_cache_size": 16 00:21:42.623 } 00:21:42.623 }, 00:21:42.623 { 00:21:42.623 "method": "bdev_raid_set_options", 00:21:42.623 "params": { 00:21:42.623 "process_window_size_kb": 1024, 00:21:42.623 "process_max_bandwidth_mb_sec": 0 00:21:42.623 } 00:21:42.623 }, 00:21:42.623 { 00:21:42.623 "method": "bdev_iscsi_set_options", 00:21:42.623 "params": { 00:21:42.623 "timeout_sec": 30 00:21:42.623 } 00:21:42.623 }, 00:21:42.623 { 00:21:42.623 "method": "bdev_nvme_set_options", 00:21:42.623 "params": { 00:21:42.623 "action_on_timeout": "none", 00:21:42.623 "timeout_us": 0, 00:21:42.623 "timeout_admin_us": 0, 00:21:42.623 "keep_alive_timeout_ms": 10000, 00:21:42.623 "arbitration_burst": 0, 00:21:42.623 "low_priority_weight": 0, 00:21:42.623 "medium_priority_weight": 0, 00:21:42.623 "high_priority_weight": 0, 00:21:42.623 "nvme_adminq_poll_period_us": 10000, 00:21:42.623 "nvme_ioq_poll_period_us": 0, 00:21:42.623 "io_queue_requests": 0, 00:21:42.623 "delay_cmd_submit": true, 00:21:42.623 "transport_retry_count": 4, 00:21:42.623 "bdev_retry_count": 3, 00:21:42.623 "transport_ack_timeout": 0, 00:21:42.623 "ctrlr_loss_timeout_sec": 0, 00:21:42.623 "reconnect_delay_sec": 0, 00:21:42.623 "fast_io_fail_timeout_sec": 0, 00:21:42.624 "disable_auto_failback": false, 00:21:42.624 "generate_uuids": false, 00:21:42.624 "transport_tos": 0, 00:21:42.624 "nvme_error_stat": false, 00:21:42.624 "rdma_srq_size": 0, 00:21:42.624 "io_path_stat": false, 00:21:42.624 "allow_accel_sequence": false, 00:21:42.624 "rdma_max_cq_size": 0, 00:21:42.624 "rdma_cm_event_timeout_ms": 0, 00:21:42.624 "dhchap_digests": [ 00:21:42.624 "sha256", 00:21:42.624 "sha384", 00:21:42.624 "sha512" 00:21:42.624 ], 00:21:42.624 "dhchap_dhgroups": [ 00:21:42.624 "null", 00:21:42.624 "ffdhe2048", 00:21:42.624 "ffdhe3072", 00:21:42.624 "ffdhe4096", 00:21:42.624 "ffdhe6144", 00:21:42.624 "ffdhe8192" 00:21:42.624 ] 00:21:42.624 } 00:21:42.624 }, 00:21:42.624 { 00:21:42.624 "method": "bdev_nvme_set_hotplug", 00:21:42.624 "params": { 00:21:42.624 "period_us": 100000, 00:21:42.624 "enable": false 00:21:42.624 } 00:21:42.624 }, 00:21:42.624 { 00:21:42.624 "method": "bdev_malloc_create", 00:21:42.624 
"params": { 00:21:42.624 "name": "malloc0", 00:21:42.624 "num_blocks": 8192, 00:21:42.624 "block_size": 4096, 00:21:42.624 "physical_block_size": 4096, 00:21:42.624 "uuid": "0f661775-98ff-4522-846b-6e1fdfec9bad", 00:21:42.624 "optimal_io_boundary": 0, 00:21:42.624 "md_size": 0, 00:21:42.624 "dif_type": 0, 00:21:42.624 "dif_is_head_of_md": false, 00:21:42.624 "dif_pi_format": 0 00:21:42.624 } 00:21:42.624 }, 00:21:42.624 { 00:21:42.624 "method": "bdev_wait_for_examine" 00:21:42.624 } 00:21:42.624 ] 00:21:42.624 }, 00:21:42.624 { 00:21:42.624 "subsystem": "nbd", 00:21:42.624 "config": [] 00:21:42.624 }, 00:21:42.624 { 00:21:42.624 "subsystem": "scheduler", 00:21:42.624 "config": [ 00:21:42.624 { 00:21:42.624 "method": "framework_set_scheduler", 00:21:42.624 "params": { 00:21:42.624 "name": "static" 00:21:42.624 } 00:21:42.624 } 00:21:42.624 ] 00:21:42.624 }, 00:21:42.624 { 00:21:42.624 "subsystem": "nvmf", 00:21:42.624 "config": [ 00:21:42.624 { 00:21:42.624 "method": "nvmf_set_config", 00:21:42.624 "params": { 00:21:42.624 "discovery_filter": "match_any", 00:21:42.624 "admin_cmd_passthru": { 00:21:42.624 "identify_ctrlr": false 00:21:42.624 } 00:21:42.624 } 00:21:42.624 }, 00:21:42.624 { 00:21:42.624 "method": "nvmf_set_max_subsystems", 00:21:42.624 "params": { 00:21:42.624 "max_subsystems": 1024 00:21:42.624 } 00:21:42.624 }, 00:21:42.624 { 00:21:42.624 "method": "nvmf_set_crdt", 00:21:42.624 "params": { 00:21:42.624 "crdt1": 0, 00:21:42.624 "crdt2": 0, 00:21:42.624 "crdt3": 0 00:21:42.624 } 00:21:42.624 }, 00:21:42.624 { 00:21:42.624 "method": "nvmf_create_transport", 00:21:42.624 "params": { 00:21:42.624 "trtype": "TCP", 00:21:42.624 "max_queue_depth": 128, 00:21:42.624 "max_io_qpairs_per_ctrlr": 127, 00:21:42.624 "in_capsule_data_size": 4096, 00:21:42.624 "max_io_size": 131072, 00:21:42.624 "io_unit_size": 131072, 00:21:42.624 "max_aq_depth": 128, 00:21:42.624 "num_shared_buffers": 511, 00:21:42.624 "buf_cache_size": 4294967295, 00:21:42.624 "dif_insert_or_strip": false, 00:21:42.624 "zcopy": false, 00:21:42.624 "c2h_success": false, 00:21:42.624 "sock_priority": 0, 00:21:42.624 "abort_timeout_sec": 1, 00:21:42.624 "ack_timeout": 0, 00:21:42.624 "data_wr_pool_size": 0 00:21:42.624 } 00:21:42.624 }, 00:21:42.624 { 00:21:42.624 "method": "nvmf_create_subsystem", 00:21:42.624 "params": { 00:21:42.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.624 "allow_any_host": false, 00:21:42.624 "serial_number": "00000000000000000000", 00:21:42.624 "model_number": "SPDK bdev Controller", 00:21:42.624 "max_namespaces": 32, 00:21:42.624 "min_cntlid": 1, 00:21:42.624 "max_cntlid": 65519, 00:21:42.624 "ana_reporting": false 00:21:42.624 } 00:21:42.624 }, 00:21:42.624 { 00:21:42.624 "method": "nvmf_subsystem_add_host", 00:21:42.624 "params": { 00:21:42.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.624 "host": "nqn.2016-06.io.spdk:host1", 00:21:42.624 "psk": "key0" 00:21:42.624 } 00:21:42.624 }, 00:21:42.624 { 00:21:42.624 "method": "nvmf_subsystem_add_ns", 00:21:42.624 "params": { 00:21:42.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.624 "namespace": { 00:21:42.624 "nsid": 1, 00:21:42.624 "bdev_name": "malloc0", 00:21:42.624 "nguid": "0F66177598FF4522846B6E1FDFEC9BAD", 00:21:42.624 "uuid": "0f661775-98ff-4522-846b-6e1fdfec9bad", 00:21:42.624 "no_auto_visible": false 00:21:42.624 } 00:21:42.624 } 00:21:42.624 }, 00:21:42.624 { 00:21:42.624 "method": "nvmf_subsystem_add_listener", 00:21:42.624 "params": { 00:21:42.624 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.624 "listen_address": { 
00:21:42.624 "trtype": "TCP", 00:21:42.624 "adrfam": "IPv4", 00:21:42.624 "traddr": "10.0.0.2", 00:21:42.624 "trsvcid": "4420" 00:21:42.624 }, 00:21:42.624 "secure_channel": false, 00:21:42.624 "sock_impl": "ssl" 00:21:42.624 } 00:21:42.624 } 00:21:42.624 ] 00:21:42.624 } 00:21:42.624 ] 00:21:42.624 }' 00:21:42.624 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.624 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1491061 00:21:42.624 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1491061 00:21:42.624 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:42.624 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1491061 ']' 00:21:42.624 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.624 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:42.624 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.624 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:42.624 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.624 [2024-07-26 11:10:02.091539] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:42.624 [2024-07-26 11:10:02.091586] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.624 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.885 [2024-07-26 11:10:02.149925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.885 [2024-07-26 11:10:02.216861] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.885 [2024-07-26 11:10:02.216901] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.885 [2024-07-26 11:10:02.216908] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.885 [2024-07-26 11:10:02.216915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.885 [2024-07-26 11:10:02.216919] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:42.885 [2024-07-26 11:10:02.216972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.145 [2024-07-26 11:10:02.428894] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.145 [2024-07-26 11:10:02.469314] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:43.145 [2024-07-26 11:10:02.469485] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.405 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.405 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:43.405 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:43.405 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:43.405 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.666 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.666 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1491307 00:21:43.666 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1491307 /var/tmp/bdevperf.sock 00:21:43.666 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1491307 ']' 00:21:43.666 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.666 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:43.666 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.666 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
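The /dev/fd/62 and /dev/fd/63 paths on the nvmf_tgt and bdevperf command lines are bash process substitutions: the JSON printed by the echo statements is piped straight into each application's -c option without a temporary file. A sketch of the same pattern, with the configuration body reduced to a placeholder:

    # hand an inline JSON config to bdevperf via process substitution;
    # bash exposes the pipe as /dev/fd/<n>, matching the paths in this trace
    bperfcfg='{ "subsystems": [] }'          # placeholder body, not a usable config
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &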
00:21:43.666 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:43.666 "subsystems": [ 00:21:43.666 { 00:21:43.666 "subsystem": "keyring", 00:21:43.666 "config": [ 00:21:43.666 { 00:21:43.666 "method": "keyring_file_add_key", 00:21:43.666 "params": { 00:21:43.666 "name": "key0", 00:21:43.666 "path": "/tmp/tmp.aPzY4adk1k" 00:21:43.666 } 00:21:43.666 } 00:21:43.666 ] 00:21:43.666 }, 00:21:43.666 { 00:21:43.666 "subsystem": "iobuf", 00:21:43.666 "config": [ 00:21:43.666 { 00:21:43.666 "method": "iobuf_set_options", 00:21:43.666 "params": { 00:21:43.666 "small_pool_count": 8192, 00:21:43.666 "large_pool_count": 1024, 00:21:43.666 "small_bufsize": 8192, 00:21:43.666 "large_bufsize": 135168 00:21:43.666 } 00:21:43.666 } 00:21:43.666 ] 00:21:43.666 }, 00:21:43.666 { 00:21:43.666 "subsystem": "sock", 00:21:43.666 "config": [ 00:21:43.666 { 00:21:43.666 "method": "sock_set_default_impl", 00:21:43.666 "params": { 00:21:43.666 "impl_name": "posix" 00:21:43.666 } 00:21:43.666 }, 00:21:43.666 { 00:21:43.666 "method": "sock_impl_set_options", 00:21:43.666 "params": { 00:21:43.666 "impl_name": "ssl", 00:21:43.666 "recv_buf_size": 4096, 00:21:43.666 "send_buf_size": 4096, 00:21:43.667 "enable_recv_pipe": true, 00:21:43.667 "enable_quickack": false, 00:21:43.667 "enable_placement_id": 0, 00:21:43.667 "enable_zerocopy_send_server": true, 00:21:43.667 "enable_zerocopy_send_client": false, 00:21:43.667 "zerocopy_threshold": 0, 00:21:43.667 "tls_version": 0, 00:21:43.667 "enable_ktls": false 00:21:43.667 } 00:21:43.667 }, 00:21:43.667 { 00:21:43.667 "method": "sock_impl_set_options", 00:21:43.667 "params": { 00:21:43.667 "impl_name": "posix", 00:21:43.667 "recv_buf_size": 2097152, 00:21:43.667 "send_buf_size": 2097152, 00:21:43.667 "enable_recv_pipe": true, 00:21:43.667 "enable_quickack": false, 00:21:43.667 "enable_placement_id": 0, 00:21:43.667 "enable_zerocopy_send_server": true, 00:21:43.667 "enable_zerocopy_send_client": false, 00:21:43.667 "zerocopy_threshold": 0, 00:21:43.667 "tls_version": 0, 00:21:43.667 "enable_ktls": false 00:21:43.667 } 00:21:43.667 } 00:21:43.667 ] 00:21:43.667 }, 00:21:43.667 { 00:21:43.667 "subsystem": "vmd", 00:21:43.667 "config": [] 00:21:43.667 }, 00:21:43.667 { 00:21:43.667 "subsystem": "accel", 00:21:43.667 "config": [ 00:21:43.667 { 00:21:43.667 "method": "accel_set_options", 00:21:43.667 "params": { 00:21:43.667 "small_cache_size": 128, 00:21:43.667 "large_cache_size": 16, 00:21:43.667 "task_count": 2048, 00:21:43.667 "sequence_count": 2048, 00:21:43.667 "buf_count": 2048 00:21:43.667 } 00:21:43.667 } 00:21:43.667 ] 00:21:43.667 }, 00:21:43.667 { 00:21:43.667 "subsystem": "bdev", 00:21:43.667 "config": [ 00:21:43.667 { 00:21:43.667 "method": "bdev_set_options", 00:21:43.667 "params": { 00:21:43.667 "bdev_io_pool_size": 65535, 00:21:43.667 "bdev_io_cache_size": 256, 00:21:43.667 "bdev_auto_examine": true, 00:21:43.667 "iobuf_small_cache_size": 128, 00:21:43.667 "iobuf_large_cache_size": 16 00:21:43.667 } 00:21:43.667 }, 00:21:43.667 { 00:21:43.667 "method": "bdev_raid_set_options", 00:21:43.667 "params": { 00:21:43.667 "process_window_size_kb": 1024, 00:21:43.667 "process_max_bandwidth_mb_sec": 0 00:21:43.667 } 00:21:43.667 }, 00:21:43.667 { 00:21:43.667 "method": "bdev_iscsi_set_options", 00:21:43.667 "params": { 00:21:43.667 "timeout_sec": 30 00:21:43.667 } 00:21:43.667 }, 00:21:43.667 { 00:21:43.667 "method": "bdev_nvme_set_options", 00:21:43.667 "params": { 00:21:43.667 "action_on_timeout": "none", 00:21:43.667 "timeout_us": 0, 
00:21:43.667 "timeout_admin_us": 0, 00:21:43.667 "keep_alive_timeout_ms": 10000, 00:21:43.667 "arbitration_burst": 0, 00:21:43.667 "low_priority_weight": 0, 00:21:43.667 "medium_priority_weight": 0, 00:21:43.667 "high_priority_weight": 0, 00:21:43.667 "nvme_adminq_poll_period_us": 10000, 00:21:43.667 "nvme_ioq_poll_period_us": 0, 00:21:43.667 "io_queue_requests": 512, 00:21:43.667 "delay_cmd_submit": true, 00:21:43.667 "transport_retry_count": 4, 00:21:43.667 "bdev_retry_count": 3, 00:21:43.667 "transport_ack_timeout": 0, 00:21:43.667 "ctrlr_loss_timeout_sec": 0, 00:21:43.667 "reconnect_delay_sec": 0, 00:21:43.667 "fast_io_fail_timeout_sec": 0, 00:21:43.667 "disable_auto_failback": false, 00:21:43.667 "generate_uuids": false, 00:21:43.667 "transport_tos": 0, 00:21:43.667 "nvme_error_stat": false, 00:21:43.667 "rdma_srq_size": 0, 00:21:43.667 "io_path_stat": false, 00:21:43.667 "allow_accel_sequence": false, 00:21:43.667 "rdma_max_cq_size": 0, 00:21:43.667 "rdma_cm_event_timeout_ms": 0, 00:21:43.667 "dhchap_digests": [ 00:21:43.667 "sha256", 00:21:43.667 "sha384", 00:21:43.667 "sha512" 00:21:43.667 ], 00:21:43.667 "dhchap_dhgroups": [ 00:21:43.667 "null", 00:21:43.667 "ffdhe2048", 00:21:43.667 "ffdhe3072", 00:21:43.667 "ffdhe4096", 00:21:43.667 "ffdhe6144", 00:21:43.667 "ffdhe8192" 00:21:43.667 ] 00:21:43.667 } 00:21:43.667 }, 00:21:43.667 { 00:21:43.667 "method": "bdev_nvme_attach_controller", 00:21:43.667 "params": { 00:21:43.667 "name": "nvme0", 00:21:43.667 "trtype": "TCP", 00:21:43.667 "adrfam": "IPv4", 00:21:43.667 "traddr": "10.0.0.2", 00:21:43.667 "trsvcid": "4420", 00:21:43.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.667 "prchk_reftag": false, 00:21:43.667 "prchk_guard": false, 00:21:43.667 "ctrlr_loss_timeout_sec": 0, 00:21:43.667 "reconnect_delay_sec": 0, 00:21:43.667 "fast_io_fail_timeout_sec": 0, 00:21:43.667 "psk": "key0", 00:21:43.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.667 "hdgst": false, 00:21:43.667 "ddgst": false 00:21:43.667 } 00:21:43.667 }, 00:21:43.667 { 00:21:43.667 "method": "bdev_nvme_set_hotplug", 00:21:43.667 "params": { 00:21:43.667 "period_us": 100000, 00:21:43.667 "enable": false 00:21:43.667 } 00:21:43.667 }, 00:21:43.667 { 00:21:43.667 "method": "bdev_enable_histogram", 00:21:43.667 "params": { 00:21:43.667 "name": "nvme0n1", 00:21:43.667 "enable": true 00:21:43.667 } 00:21:43.667 }, 00:21:43.667 { 00:21:43.667 "method": "bdev_wait_for_examine" 00:21:43.667 } 00:21:43.667 ] 00:21:43.667 }, 00:21:43.667 { 00:21:43.667 "subsystem": "nbd", 00:21:43.667 "config": [] 00:21:43.667 } 00:21:43.667 ] 00:21:43.667 }' 00:21:43.667 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.667 11:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.667 [2024-07-26 11:10:02.975931] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:43.667 [2024-07-26 11:10:02.975980] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491307 ] 00:21:43.667 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.667 [2024-07-26 11:10:03.031081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.667 [2024-07-26 11:10:03.103960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.927 [2024-07-26 11:10:03.254274] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.497 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.497 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:44.497 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:44.497 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:44.497 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.497 11:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:44.767 Running I/O for 1 seconds... 00:21:45.708 00:21:45.708 Latency(us) 00:21:45.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.708 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:45.708 Verification LBA range: start 0x0 length 0x2000 00:21:45.708 nvme0n1 : 1.06 995.05 3.89 0.00 0.00 125799.75 6325.65 184184.65 00:21:45.708 =================================================================================================================== 00:21:45.708 Total : 995.05 3.89 0.00 0.00 125799.75 6325.65 184184.65 00:21:45.708 0 00:21:45.708 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:45.708 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:45.708 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:45.708 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:21:45.708 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:21:45.708 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:45.708 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:45.708 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:45.708 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:21:45.708 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:45.708 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:45.708 nvmf_trace.0 00:21:45.968 11:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1491307 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1491307 ']' 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1491307 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1491307 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1491307' 00:21:45.968 killing process with pid 1491307 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1491307 00:21:45.968 Received shutdown signal, test time was about 1.000000 seconds 00:21:45.968 00:21:45.968 Latency(us) 00:21:45.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.968 =================================================================================================================== 00:21:45.968 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1491307 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.968 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.968 rmmod nvme_tcp 00:21:46.229 rmmod nvme_fabrics 00:21:46.229 rmmod nvme_keyring 00:21:46.229 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:46.229 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:46.229 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:46.229 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1491061 ']' 00:21:46.229 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1491061 00:21:46.229 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1491061 ']' 00:21:46.229 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1491061 00:21:46.229 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:46.229 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:46.229 11:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1491061 00:21:46.229 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:46.229 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:46.229 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1491061' 00:21:46.229 killing process with pid 1491061 00:21:46.229 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1491061 00:21:46.229 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1491061 00:21:46.490 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:46.490 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:46.490 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:46.490 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:46.490 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:46.490 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.490 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.490 11:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.402 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:48.402 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.PbUxqVO757 /tmp/tmp.U7JGLd8LRK /tmp/tmp.aPzY4adk1k 00:21:48.402 00:21:48.402 real 1m24.642s 00:21:48.402 user 2m13.612s 00:21:48.402 sys 0m25.852s 00:21:48.402 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:48.402 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.402 ************************************ 00:21:48.402 END TEST nvmf_tls 00:21:48.402 ************************************ 00:21:48.402 11:10:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:48.402 11:10:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:48.402 11:10:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:48.402 11:10:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:48.402 ************************************ 00:21:48.402 START TEST nvmf_fips 00:21:48.402 ************************************ 00:21:48.402 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:48.664 * Looking for test storage... 
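fips.sh first gates on the host OpenSSL being a 3.x build with the FIPS provider available; the version check it walks through below compares 3.0.9 against 3.0.0 component by component. The same greater-or-equal test can be sketched with GNU sort -V, as an alternative idiom rather than the script's own cmp_versions helper:

    # illustrative version gate; assumes GNU coreutils sort with -V support
    ver=$(openssl version | awk '{print $2}')                  # e.g. 3.0.9 on this host
    if [[ "$(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1)" == 3.0.0 ]]; then
        echo "OpenSSL $ver is >= 3.0.0"
    fi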
00:21:48.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:48.664 11:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:48.664 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:21:48.665 Error setting digest 00:21:48.665 00526267017F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:48.665 00526267017F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:48.665 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.926 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.926 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.926 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:48.926 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:48.926 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:48.926 11:10:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:54.214 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 
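[Editor's note] The fips.sh steps traced earlier in this run (the scripts/common.sh version comparison through fips.sh@127 at 11:10:08) form the FIPS sanity gate: confirm the OpenSSL FIPS module exists under the modules directory, point OPENSSL_CONF at a generated config that activates the base and fips providers, verify both providers are listed, and then prove enforcement by expecting a non-approved digest (MD5) to fail; the "Error setting digest" lines above are that expected failure, which the NOT wrapper turns into a pass. A minimal sketch of the gate, keeping only the config file name from the trace and simplifying everything else:

    # Sketch only - approximates the checks traced above, not the exact fips.sh code.
    modulesdir=$(openssl info -modulesdir)
    [[ -f "$modulesdir/fips.so" ]] || { echo "no FIPS provider module" >&2; exit 1; }

    export OPENSSL_CONF=spdk_fips.conf            # generated config enabling base + fips providers
    openssl list -providers | grep name           # expect a base and a fips provider entry

    if openssl md5 /dev/null 2>/dev/null; then    # MD5 must be rejected when FIPS is enforced
        echo "MD5 unexpectedly succeeded - FIPS not active" >&2
        exit 1
    fi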
00:21:54.214 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:54.214 Found net devices under 0000:86:00.0: cvl_0_0 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:54.214 Found net devices under 0000:86:00.1: cvl_0_1 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:54.214 
11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.214 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:54.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:21:54.215 00:21:54.215 --- 10.0.0.2 ping statistics --- 00:21:54.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.215 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:54.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.452 ms 00:21:54.215 00:21:54.215 --- 10.0.0.1 ping statistics --- 00:21:54.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.215 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1495103 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1495103 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1495103 ']' 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.215 11:10:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:54.215 [2024-07-26 11:10:13.575793] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
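[Editor's note] nvmf_tcp_init, traced just above, builds the two-port loopback topology that the phy TCP tests use: one physical port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed from the trace (interface names are the ones from this run):

    tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk

    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"                  # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$ini_if"              # initiator side, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1             # target -> initiator

    # nvmfappstart then runs the target inside the namespace:
    ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &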
00:21:54.215 [2024-07-26 11:10:13.575842] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.215 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.215 [2024-07-26 11:10:13.633490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.475 [2024-07-26 11:10:13.713844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.475 [2024-07-26 11:10:13.713878] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.475 [2024-07-26 11:10:13.713885] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.475 [2024-07-26 11:10:13.713892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.475 [2024-07-26 11:10:13.713897] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.475 [2024-07-26 11:10:13.713912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.045 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:55.045 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:55.045 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:55.045 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:55.045 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:55.045 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.045 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:55.045 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:55.045 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:55.045 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:55.045 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:55.045 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:55.045 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:55.045 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:55.306 [2024-07-26 11:10:14.566059] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.306 [2024-07-26 11:10:14.582069] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:55.306 [2024-07-26 11:10:14.582214] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.306 
[2024-07-26 11:10:14.610334] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:55.306 malloc0 00:21:55.306 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:55.306 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1495357 00:21:55.306 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:55.306 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1495357 /var/tmp/bdevperf.sock 00:21:55.306 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1495357 ']' 00:21:55.306 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.306 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:55.306 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.306 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:55.306 11:10:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:55.306 [2024-07-26 11:10:14.689624] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:55.306 [2024-07-26 11:10:14.689671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495357 ] 00:21:55.306 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.306 [2024-07-26 11:10:14.739055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.566 [2024-07-26 11:10:14.811565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.136 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:56.136 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:56.136 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:56.136 [2024-07-26 11:10:15.630371] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:56.136 [2024-07-26 11:10:15.630448] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:56.396 TLSTESTn1 00:21:56.396 11:10:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:56.396 Running I/O for 10 seconds... 
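[Editor's note] The FIPS case itself is a TLS-PSK I/O run: the trace above writes the NVMe TLS PSK to key.txt with mode 0600, lets setup_nvmf_tgt_conf configure the target over rpc.py (TCP transport, TLS listener on 10.0.0.2:4420, malloc0 namespace, PSK host entry; collapsed into one call below because the individual RPCs are not expanded in this excerpt), then attaches a bdevperf instance to the subsystem with the same PSK. Host-side commands as traced:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$key" > key.txt && chmod 0600 key.txt
    setup_nvmf_tgt_conf key.txt                        # target-side rpc.py calls, not shown here

    # secondary app that drives verify I/O for 10 seconds over TLS
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The deprecation warnings about the PSK path and spdk_nvme_ctrlr_opts.psk in the surrounding log are expected on this SPDK revision (v24.09-pre), where TLS support was still flagged experimental.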
00:22:08.678 00:22:08.678 Latency(us) 00:22:08.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.678 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:08.678 Verification LBA range: start 0x0 length 0x2000 00:22:08.678 TLSTESTn1 : 10.09 1137.43 4.44 0.00 0.00 112139.54 6297.15 165948.55 00:22:08.678 =================================================================================================================== 00:22:08.678 Total : 1137.43 4.44 0.00 0.00 112139.54 6297.15 165948.55 00:22:08.678 0 00:22:08.678 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:08.678 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:08.678 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:22:08.678 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:22:08.678 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:08.678 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:08.678 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:08.678 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:08.678 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:08.678 11:10:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:08.678 nvmf_trace.0 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1495357 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1495357 ']' 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1495357 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1495357 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1495357' 00:22:08.678 killing process with pid 1495357 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1495357 00:22:08.678 Received shutdown signal, test time was about 10.000000 seconds 00:22:08.678 00:22:08.678 Latency(us) 00:22:08.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.678 =================================================================================================================== 00:22:08.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:08.678 
[2024-07-26 11:10:26.073560] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1495357 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:08.678 rmmod nvme_tcp 00:22:08.678 rmmod nvme_fabrics 00:22:08.678 rmmod nvme_keyring 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1495103 ']' 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1495103 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1495103 ']' 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1495103 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1495103 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1495103' 00:22:08.678 killing process with pid 1495103 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1495103 00:22:08.678 [2024-07-26 11:10:26.361719] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1495103 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:08.678 11:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.678 11:10:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.247 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:09.247 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:09.247 00:22:09.247 real 0m20.734s 00:22:09.247 user 0m23.642s 00:22:09.247 sys 0m7.939s 00:22:09.247 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:09.247 11:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:09.247 ************************************ 00:22:09.247 END TEST nvmf_fips 00:22:09.247 ************************************ 00:22:09.247 11:10:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:22:09.247 11:10:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:22:09.247 11:10:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:22:09.247 11:10:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:22:09.247 11:10:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:22:09.248 11:10:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.530 
11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:14.530 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:14.530 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
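[Editor's note] The gather_supported_nvmf_pci_devs walk being traced around this point (and earlier at 11:10:13) is how the harness picks physical ports for TCP tests: PCI device IDs for Intel E810/X722 and Mellanox ConnectX parts are collected into arrays, each matching PCI address is resolved to its kernel interface through sysfs, and interfaces that are up are appended to net_devs; here both E810 ports resolve to cvl_0_0 and cvl_0_1. A reduced sketch of the E810 path follows; the real script caches lspci output in pci_bus_cache, so the direct lspci query below is a simplification:

    intel=0x8086
    declare -a pci_devs net_devs

    # E810 ports (device id 0x159b), PCI addresses in full domain:bus:dev.fn form
    pci_devs+=($(lspci -Dnd "${intel#0x}:159b" | awk '{print $1}'))

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep the ifname
        net_devs+=("${pci_net_devs[@]}")
    done
    echo "usable TCP interfaces: ${net_devs[*]}"            # cvl_0_0 cvl_0_1 in this run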
00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:14.530 Found net devices under 0000:86:00.0: cvl_0_0 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.530 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:14.531 Found net devices under 0000:86:00.1: cvl_0_1 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:14.531 ************************************ 00:22:14.531 START TEST nvmf_perf_adq 00:22:14.531 ************************************ 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:14.531 * Looking for test storage... 
00:22:14.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.531 11:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:14.531 11:10:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.817 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.817 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.817 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.817 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.817 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.818 11:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:19.818 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:19.818 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:19.818 Found net devices under 0000:86:00.0: cvl_0_0 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:19.818 Found net devices under 0000:86:00.1: cvl_0_1 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:19.818 11:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:20.389 11:10:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:22.299 11:10:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
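[Editor's note] perf_adq.sh refuses to run without a supported port and then recycles the ice driver (the rmmod ice / modprobe ice / sleep 5 traced just above) so the E810 ports come back in a clean state; that is why the same nvmftestinit / PCI discovery sequence repeats here at 11:10:46. The guard and reload amount to:

    # As traced above; the exact exit status on "no interfaces" is the script's choice.
    (( ${#TCP_INTERFACE_LIST[@]} == 0 )) && { echo "no ADQ-capable interfaces found"; exit 0; }

    perf=./build/bin/spdk_nvme_perf     # used later for the ADQ vs. non-ADQ comparison
    rmmod ice
    modprobe ice
    sleep 5                             # let the ports re-register before nvmftestinit runs again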
00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:27.613 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:27.613 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.613 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:27.614 Found net devices under 0000:86:00.0: cvl_0_0 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.614 11:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:27.614 Found net devices under 0000:86:00.1: cvl_0_1 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
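nvmf_tcp_init, traced above, splits the two E810 ports across a network namespace so target and initiator traffic actually crosses the link: cvl_0_0 moves into cvl_0_0_ns_spdk and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1. Collected from the trace as a runnable sketch (interface, namespace and address values are the ones this CI host uses):

  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1       # start from clean addressing
  ip netns add "$NVMF_TARGET_NAMESPACE"
  ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"       # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side (root namespace)
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The two pings that follow in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) confirm reachability in both directions before the target is started.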
00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:27.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:22:27.614 00:22:27.614 --- 10.0.0.2 ping statistics --- 00:22:27.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.614 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:27.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:22:27.614 00:22:27.614 --- 10.0.0.1 ping statistics --- 00:22:27.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.614 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1505042 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1505042 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1505042 ']' 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:27.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.614 11:10:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.614 [2024-07-26 11:10:46.901872] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:27.614 [2024-07-26 11:10:46.901920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.614 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.614 [2024-07-26 11:10:46.960862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.614 [2024-07-26 11:10:47.034693] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.614 [2024-07-26 11:10:47.034734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.614 [2024-07-26 11:10:47.034741] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.614 [2024-07-26 11:10:47.034746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.614 [2024-07-26 11:10:47.034751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.614 [2024-07-26 11:10:47.034848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.614 [2024-07-26 11:10:47.034945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.614 [2024-07-26 11:10:47.035009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.614 [2024-07-26 11:10:47.035010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
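With nvmf_tgt started inside the namespace (--wait-for-rpc keeps it paused until the framework is explicitly initialized), adq_configure_nvmf_target 0 drives it over the RPC socket. rpc_cmd in the autotest framework forwards these calls to scripts/rpc.py against /var/tmp/spdk.sock, so the sequence traced here and continued just below is roughly:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  impl=$($rpc sock_get_default_impl | jq -r .impl_name)    # "posix" in this run
  # Placement id 0 = ADQ off; this is what makes it the baseline pass.
  $rpc sock_impl_set_options -i "$impl" --enable-placement-id 0 --enable-zerocopy-send-server
  $rpc framework_start_init                                # release the --wait-for-rpc pause
  $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  $rpc bdev_malloc_create 64 512 -b Malloc1                # 64 MB RAM-backed namespace, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The same steps repeat for the ADQ-enabled pass later in the log, with placement id 1 and sock priority 1 instead of 0.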
00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.554 [2024-07-26 11:10:47.902971] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.554 Malloc1 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:28.554 [2024-07-26 11:10:47.954586] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1505294 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:28.554 11:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:28.554 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.089 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:31.089 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.089 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.089 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.089 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:31.089 "tick_rate": 2300000000, 00:22:31.089 "poll_groups": [ 00:22:31.089 { 00:22:31.089 "name": "nvmf_tgt_poll_group_000", 00:22:31.089 "admin_qpairs": 1, 00:22:31.089 "io_qpairs": 1, 00:22:31.089 "current_admin_qpairs": 1, 00:22:31.089 "current_io_qpairs": 1, 00:22:31.089 "pending_bdev_io": 0, 00:22:31.089 "completed_nvme_io": 18896, 00:22:31.089 "transports": [ 00:22:31.089 { 00:22:31.089 "trtype": "TCP" 00:22:31.089 } 00:22:31.089 ] 00:22:31.089 }, 00:22:31.089 { 00:22:31.089 "name": "nvmf_tgt_poll_group_001", 00:22:31.089 "admin_qpairs": 0, 00:22:31.089 "io_qpairs": 1, 00:22:31.089 "current_admin_qpairs": 0, 00:22:31.089 "current_io_qpairs": 1, 00:22:31.089 "pending_bdev_io": 0, 00:22:31.089 "completed_nvme_io": 19213, 00:22:31.089 "transports": [ 00:22:31.089 { 00:22:31.089 "trtype": "TCP" 00:22:31.089 } 00:22:31.089 ] 00:22:31.089 }, 00:22:31.089 { 00:22:31.089 "name": "nvmf_tgt_poll_group_002", 00:22:31.089 "admin_qpairs": 0, 00:22:31.089 "io_qpairs": 1, 00:22:31.089 "current_admin_qpairs": 0, 00:22:31.089 "current_io_qpairs": 1, 00:22:31.089 "pending_bdev_io": 0, 00:22:31.089 "completed_nvme_io": 18780, 00:22:31.089 "transports": [ 00:22:31.089 { 00:22:31.089 "trtype": "TCP" 00:22:31.089 } 00:22:31.089 ] 00:22:31.089 }, 00:22:31.089 { 00:22:31.089 "name": "nvmf_tgt_poll_group_003", 00:22:31.089 "admin_qpairs": 0, 00:22:31.089 "io_qpairs": 1, 00:22:31.089 "current_admin_qpairs": 0, 00:22:31.089 "current_io_qpairs": 1, 00:22:31.089 "pending_bdev_io": 0, 00:22:31.089 "completed_nvme_io": 19094, 00:22:31.089 "transports": [ 00:22:31.089 { 00:22:31.089 "trtype": "TCP" 00:22:31.089 } 00:22:31.089 ] 00:22:31.089 } 00:22:31.089 ] 00:22:31.089 }' 00:22:31.089 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:31.089 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:31.089 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:31.089 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:31.089 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 1505294 00:22:39.268 Initializing NVMe Controllers 00:22:39.268 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:39.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:39.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:39.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:39.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:39.268 Initialization complete. Launching workers. 00:22:39.268 ======================================================== 00:22:39.268 Latency(us) 00:22:39.268 Device Information : IOPS MiB/s Average min max 00:22:39.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10127.83 39.56 6321.00 1505.08 18750.48 00:22:39.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10246.62 40.03 6247.18 2067.35 17696.84 00:22:39.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10038.83 39.21 6376.51 1707.82 20113.35 00:22:39.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10023.93 39.16 6384.90 1349.09 20501.63 00:22:39.268 ======================================================== 00:22:39.268 Total : 40437.20 157.96 6331.92 1349.09 20501.63 00:22:39.268 00:22:39.268 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:39.268 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:39.268 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:39.268 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:39.268 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:39.268 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:39.268 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:39.269 rmmod nvme_tcp 00:22:39.269 rmmod nvme_fabrics 00:22:39.269 rmmod nvme_keyring 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1505042 ']' 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1505042 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1505042 ']' 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1505042 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1505042 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:39.269 11:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1505042' 00:22:39.269 killing process with pid 1505042 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1505042 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1505042 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.269 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.177 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:41.177 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:41.177 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:42.559 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:43.940 11:11:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:49.223 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:49.223 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:49.223 Found net devices under 0000:86:00.0: cvl_0_0 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:49.223 Found net devices under 0000:86:00.1: cvl_0_1 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.223 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:49.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:22:49.224 00:22:49.224 --- 10.0.0.2 ping statistics --- 00:22:49.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.224 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:22:49.224 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:49.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.591 ms 00:22:49.224 00:22:49.224 --- 10.0.0.1 ping statistics --- 00:22:49.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.224 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:22:49.484 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.484 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:49.484 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:49.484 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.484 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:49.484 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:49.484 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.484 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:49.484 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:49.484 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:49.484 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:49.484 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:49.485 net.core.busy_poll = 1 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:49.485 net.core.busy_read = 1 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:49.485 
11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1509090 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1509090 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1509090 ']' 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.485 11:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.745 [2024-07-26 11:11:09.012542] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:49.745 [2024-07-26 11:11:09.012591] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.745 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.745 [2024-07-26 11:11:09.069502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.745 [2024-07-26 11:11:09.142398] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.745 [2024-07-26 11:11:09.142437] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.745 [2024-07-26 11:11:09.142443] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.745 [2024-07-26 11:11:09.142449] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.745 [2024-07-26 11:11:09.142454] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
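Before the ADQ-enabled pass, adq_configure_driver (perf_adq.sh lines 22-38 in the trace) prepares cvl_0_0 inside the target namespace: hardware TC offload on, the channel-pkt-inspect-optimize priv-flag off, busy polling enabled, and an mqprio/flower setup that steers NVMe/TCP traffic on port 4420 into its own traffic class. Collected from the trace as one sketch (the 2@0 2@2 queue split matches the two-queue ADQ traffic class this test uses):

  ns="ip netns exec cvl_0_0_ns_spdk"
  $ns ethtool --offload cvl_0_0 hw-tc-offload on
  $ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # Two traffic classes: TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2, channel mode.
  $ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  $ns tc qdisc add dev cvl_0_0 ingress
  # Steer NVMe/TCP (dst 10.0.0.2:4420) into hardware TC1, offloaded (skip_sw).
  $ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # Align transmit/receive queue affinity for the ADQ queues (SPDK helper script).
  $ns /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0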
00:22:49.745 [2024-07-26 11:11:09.142511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.745 [2024-07-26 11:11:09.142607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.745 [2024-07-26 11:11:09.142696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.745 [2024-07-26 11:11:09.142697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.687 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.687 [2024-07-26 11:11:10.028299] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.687 Malloc1 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:50.687 [2024-07-26 11:11:10.076618] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1509340 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:50.687 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:50.687 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.597 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:52.597 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.597 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.858 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.858 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:52.858 "tick_rate": 2300000000, 00:22:52.858 "poll_groups": [ 00:22:52.858 { 00:22:52.858 "name": "nvmf_tgt_poll_group_000", 00:22:52.858 "admin_qpairs": 1, 00:22:52.858 "io_qpairs": 2, 00:22:52.858 "current_admin_qpairs": 1, 00:22:52.858 
"current_io_qpairs": 2, 00:22:52.858 "pending_bdev_io": 0, 00:22:52.858 "completed_nvme_io": 27172, 00:22:52.858 "transports": [ 00:22:52.858 { 00:22:52.858 "trtype": "TCP" 00:22:52.858 } 00:22:52.858 ] 00:22:52.858 }, 00:22:52.858 { 00:22:52.858 "name": "nvmf_tgt_poll_group_001", 00:22:52.858 "admin_qpairs": 0, 00:22:52.858 "io_qpairs": 2, 00:22:52.858 "current_admin_qpairs": 0, 00:22:52.858 "current_io_qpairs": 2, 00:22:52.858 "pending_bdev_io": 0, 00:22:52.858 "completed_nvme_io": 27322, 00:22:52.858 "transports": [ 00:22:52.858 { 00:22:52.858 "trtype": "TCP" 00:22:52.858 } 00:22:52.858 ] 00:22:52.858 }, 00:22:52.858 { 00:22:52.858 "name": "nvmf_tgt_poll_group_002", 00:22:52.858 "admin_qpairs": 0, 00:22:52.858 "io_qpairs": 0, 00:22:52.858 "current_admin_qpairs": 0, 00:22:52.858 "current_io_qpairs": 0, 00:22:52.858 "pending_bdev_io": 0, 00:22:52.858 "completed_nvme_io": 0, 00:22:52.858 "transports": [ 00:22:52.858 { 00:22:52.858 "trtype": "TCP" 00:22:52.858 } 00:22:52.858 ] 00:22:52.858 }, 00:22:52.858 { 00:22:52.858 "name": "nvmf_tgt_poll_group_003", 00:22:52.858 "admin_qpairs": 0, 00:22:52.858 "io_qpairs": 0, 00:22:52.858 "current_admin_qpairs": 0, 00:22:52.858 "current_io_qpairs": 0, 00:22:52.858 "pending_bdev_io": 0, 00:22:52.858 "completed_nvme_io": 0, 00:22:52.858 "transports": [ 00:22:52.858 { 00:22:52.858 "trtype": "TCP" 00:22:52.858 } 00:22:52.858 ] 00:22:52.858 } 00:22:52.858 ] 00:22:52.858 }' 00:22:52.858 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:52.858 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:52.858 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:22:52.858 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:22:52.858 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1509340 00:23:01.033 Initializing NVMe Controllers 00:23:01.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:01.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:01.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:01.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:01.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:01.033 Initialization complete. Launching workers. 
00:23:01.033 ======================================================== 00:23:01.033 Latency(us) 00:23:01.033 Device Information : IOPS MiB/s Average min max 00:23:01.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7347.10 28.70 8740.85 1933.51 52890.03 00:23:01.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7320.10 28.59 8748.92 1752.76 53887.95 00:23:01.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6917.00 27.02 9256.85 1771.15 53783.50 00:23:01.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7437.50 29.05 8605.27 1760.92 54579.01 00:23:01.033 ======================================================== 00:23:01.033 Total : 29021.69 113.37 8831.12 1752.76 54579.01 00:23:01.033 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:01.033 rmmod nvme_tcp 00:23:01.033 rmmod nvme_fabrics 00:23:01.033 rmmod nvme_keyring 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1509090 ']' 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1509090 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1509090 ']' 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1509090 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1509090 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1509090' 00:23:01.033 killing process with pid 1509090 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1509090 00:23:01.033 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1509090 00:23:01.293 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:01.293 
11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:01.293 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:01.293 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:01.293 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:01.293 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.293 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.293 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.205 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:03.205 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:03.205 00:23:03.205 real 0m49.135s 00:23:03.205 user 2m49.207s 00:23:03.205 sys 0m9.631s 00:23:03.205 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:03.205 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.205 ************************************ 00:23:03.205 END TEST nvmf_perf_adq 00:23:03.205 ************************************ 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:03.466 ************************************ 00:23:03.466 START TEST nvmf_shutdown 00:23:03.466 ************************************ 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:03.466 * Looking for test storage... 
00:23:03.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.466 11:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:03.466 11:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:03.466 ************************************ 00:23:03.466 START TEST nvmf_shutdown_tc1 00:23:03.466 ************************************ 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:03.466 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:03.467 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.467 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:03.467 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:03.467 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:03.467 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.467 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.467 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.467 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:03.467 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:03.467 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:03.467 11:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:08.747 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:08.747 11:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:08.747 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:08.747 Found net devices under 0000:86:00.0: cvl_0_0 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:08.747 Found net devices under 0000:86:00.1: cvl_0_1 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.747 11:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:08.747 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.007 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.007 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.007 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:09.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:23:09.008 00:23:09.008 --- 10.0.0.2 ping statistics --- 00:23:09.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.008 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:23:09.008 00:23:09.008 --- 10.0.0.1 ping statistics --- 00:23:09.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.008 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1514553 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1514553 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1514553 ']' 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.008 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.008 [2024-07-26 11:11:28.430970] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:09.008 [2024-07-26 11:11:28.431012] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.008 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.008 [2024-07-26 11:11:28.487783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.267 [2024-07-26 11:11:28.569376] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.267 [2024-07-26 11:11:28.569410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.267 [2024-07-26 11:11:28.569417] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.267 [2024-07-26 11:11:28.569423] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.267 [2024-07-26 11:11:28.569428] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
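For reference, the target-side network plumbing that the nvmf_tcp_init trace above records reduces to the command sequence below. This is a condensed sketch rather than the harness code itself: the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses and the relative nvmf_tgt path are taken from this rig's log and would differ on other setups.

# Condensed sketch of nvmf_tcp_init as recorded above; names/addresses come from this rig.
# Move one E810 port into a dedicated namespace for the target,
# keep the peer port in the host namespace for the initiator.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic to the default port and verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Start the target inside the namespace, as nvmfappstart does in the trace
# (path relative to an SPDK checkout).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &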
00:23:09.267 [2024-07-26 11:11:28.569550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.267 [2024-07-26 11:11:28.569639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.267 [2024-07-26 11:11:28.570095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.267 [2024-07-26 11:11:28.570095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.837 [2024-07-26 11:11:29.282508] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:09.837 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:10.097 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:10.097 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:10.097 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:10.097 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.097 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.097 Malloc1 00:23:10.097 [2024-07-26 11:11:29.378446] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.097 Malloc2 00:23:10.097 Malloc3 00:23:10.097 Malloc4 00:23:10.097 Malloc5 00:23:10.097 Malloc6 00:23:10.357 Malloc7 00:23:10.357 Malloc8 00:23:10.357 Malloc9 00:23:10.357 Malloc10 00:23:10.357 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.357 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:10.357 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:10.357 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.357 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1514828 00:23:10.357 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1514828 /var/tmp/bdevperf.sock 00:23:10.357 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1514828 ']' 00:23:10.357 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.357 11:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:10.357 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:10.357 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.358 { 00:23:10.358 "params": { 00:23:10.358 "name": "Nvme$subsystem", 00:23:10.358 "trtype": "$TEST_TRANSPORT", 00:23:10.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.358 "adrfam": "ipv4", 00:23:10.358 "trsvcid": "$NVMF_PORT", 00:23:10.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.358 "hdgst": ${hdgst:-false}, 00:23:10.358 "ddgst": ${ddgst:-false} 00:23:10.358 }, 00:23:10.358 "method": "bdev_nvme_attach_controller" 00:23:10.358 } 00:23:10.358 EOF 00:23:10.358 )") 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.358 { 00:23:10.358 "params": { 00:23:10.358 "name": "Nvme$subsystem", 00:23:10.358 "trtype": "$TEST_TRANSPORT", 00:23:10.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.358 "adrfam": "ipv4", 00:23:10.358 "trsvcid": "$NVMF_PORT", 00:23:10.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.358 "hdgst": ${hdgst:-false}, 00:23:10.358 "ddgst": ${ddgst:-false} 00:23:10.358 }, 00:23:10.358 "method": "bdev_nvme_attach_controller" 00:23:10.358 } 00:23:10.358 EOF 00:23:10.358 )") 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.358 { 00:23:10.358 "params": { 00:23:10.358 "name": 
"Nvme$subsystem", 00:23:10.358 "trtype": "$TEST_TRANSPORT", 00:23:10.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.358 "adrfam": "ipv4", 00:23:10.358 "trsvcid": "$NVMF_PORT", 00:23:10.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.358 "hdgst": ${hdgst:-false}, 00:23:10.358 "ddgst": ${ddgst:-false} 00:23:10.358 }, 00:23:10.358 "method": "bdev_nvme_attach_controller" 00:23:10.358 } 00:23:10.358 EOF 00:23:10.358 )") 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.358 { 00:23:10.358 "params": { 00:23:10.358 "name": "Nvme$subsystem", 00:23:10.358 "trtype": "$TEST_TRANSPORT", 00:23:10.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.358 "adrfam": "ipv4", 00:23:10.358 "trsvcid": "$NVMF_PORT", 00:23:10.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.358 "hdgst": ${hdgst:-false}, 00:23:10.358 "ddgst": ${ddgst:-false} 00:23:10.358 }, 00:23:10.358 "method": "bdev_nvme_attach_controller" 00:23:10.358 } 00:23:10.358 EOF 00:23:10.358 )") 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.358 { 00:23:10.358 "params": { 00:23:10.358 "name": "Nvme$subsystem", 00:23:10.358 "trtype": "$TEST_TRANSPORT", 00:23:10.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.358 "adrfam": "ipv4", 00:23:10.358 "trsvcid": "$NVMF_PORT", 00:23:10.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.358 "hdgst": ${hdgst:-false}, 00:23:10.358 "ddgst": ${ddgst:-false} 00:23:10.358 }, 00:23:10.358 "method": "bdev_nvme_attach_controller" 00:23:10.358 } 00:23:10.358 EOF 00:23:10.358 )") 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.358 { 00:23:10.358 "params": { 00:23:10.358 "name": "Nvme$subsystem", 00:23:10.358 "trtype": "$TEST_TRANSPORT", 00:23:10.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.358 "adrfam": "ipv4", 00:23:10.358 "trsvcid": "$NVMF_PORT", 00:23:10.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.358 "hdgst": ${hdgst:-false}, 00:23:10.358 "ddgst": ${ddgst:-false} 00:23:10.358 }, 00:23:10.358 "method": "bdev_nvme_attach_controller" 00:23:10.358 } 00:23:10.358 EOF 00:23:10.358 )") 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.358 { 00:23:10.358 "params": { 00:23:10.358 "name": "Nvme$subsystem", 00:23:10.358 "trtype": "$TEST_TRANSPORT", 00:23:10.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.358 "adrfam": "ipv4", 00:23:10.358 "trsvcid": "$NVMF_PORT", 00:23:10.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.358 "hdgst": ${hdgst:-false}, 00:23:10.358 "ddgst": ${ddgst:-false} 00:23:10.358 }, 00:23:10.358 "method": "bdev_nvme_attach_controller" 00:23:10.358 } 00:23:10.358 EOF 00:23:10.358 )") 00:23:10.358 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:10.358 [2024-07-26 11:11:29.852738] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:10.358 [2024-07-26 11:11:29.852785] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:10.618 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.618 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.618 { 00:23:10.618 "params": { 00:23:10.618 "name": "Nvme$subsystem", 00:23:10.618 "trtype": "$TEST_TRANSPORT", 00:23:10.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.618 "adrfam": "ipv4", 00:23:10.618 "trsvcid": "$NVMF_PORT", 00:23:10.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.618 "hdgst": ${hdgst:-false}, 00:23:10.618 "ddgst": ${ddgst:-false} 00:23:10.618 }, 00:23:10.618 "method": "bdev_nvme_attach_controller" 00:23:10.618 } 00:23:10.618 EOF 00:23:10.618 )") 00:23:10.618 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:10.618 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.618 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.618 { 00:23:10.618 "params": { 00:23:10.618 "name": "Nvme$subsystem", 00:23:10.618 "trtype": "$TEST_TRANSPORT", 00:23:10.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.618 "adrfam": "ipv4", 00:23:10.618 "trsvcid": "$NVMF_PORT", 00:23:10.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.618 "hdgst": ${hdgst:-false}, 00:23:10.618 "ddgst": ${ddgst:-false} 00:23:10.618 }, 00:23:10.618 "method": "bdev_nvme_attach_controller" 00:23:10.618 } 00:23:10.618 EOF 00:23:10.618 )") 00:23:10.618 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:10.618 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:10.618 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:10.618 { 00:23:10.618 "params": { 00:23:10.618 "name": "Nvme$subsystem", 00:23:10.618 "trtype": "$TEST_TRANSPORT", 00:23:10.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.618 "adrfam": "ipv4", 00:23:10.618 
"trsvcid": "$NVMF_PORT", 00:23:10.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.618 "hdgst": ${hdgst:-false}, 00:23:10.618 "ddgst": ${ddgst:-false} 00:23:10.618 }, 00:23:10.618 "method": "bdev_nvme_attach_controller" 00:23:10.618 } 00:23:10.618 EOF 00:23:10.618 )") 00:23:10.618 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:10.618 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:10.618 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.618 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:10.618 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:10.619 "params": { 00:23:10.619 "name": "Nvme1", 00:23:10.619 "trtype": "tcp", 00:23:10.619 "traddr": "10.0.0.2", 00:23:10.619 "adrfam": "ipv4", 00:23:10.619 "trsvcid": "4420", 00:23:10.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.619 "hdgst": false, 00:23:10.619 "ddgst": false 00:23:10.619 }, 00:23:10.619 "method": "bdev_nvme_attach_controller" 00:23:10.619 },{ 00:23:10.619 "params": { 00:23:10.619 "name": "Nvme2", 00:23:10.619 "trtype": "tcp", 00:23:10.619 "traddr": "10.0.0.2", 00:23:10.619 "adrfam": "ipv4", 00:23:10.619 "trsvcid": "4420", 00:23:10.619 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:10.619 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:10.619 "hdgst": false, 00:23:10.619 "ddgst": false 00:23:10.619 }, 00:23:10.619 "method": "bdev_nvme_attach_controller" 00:23:10.619 },{ 00:23:10.619 "params": { 00:23:10.619 "name": "Nvme3", 00:23:10.619 "trtype": "tcp", 00:23:10.619 "traddr": "10.0.0.2", 00:23:10.619 "adrfam": "ipv4", 00:23:10.619 "trsvcid": "4420", 00:23:10.619 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:10.619 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:10.619 "hdgst": false, 00:23:10.619 "ddgst": false 00:23:10.619 }, 00:23:10.619 "method": "bdev_nvme_attach_controller" 00:23:10.619 },{ 00:23:10.619 "params": { 00:23:10.619 "name": "Nvme4", 00:23:10.619 "trtype": "tcp", 00:23:10.619 "traddr": "10.0.0.2", 00:23:10.619 "adrfam": "ipv4", 00:23:10.619 "trsvcid": "4420", 00:23:10.619 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:10.619 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:10.619 "hdgst": false, 00:23:10.619 "ddgst": false 00:23:10.619 }, 00:23:10.619 "method": "bdev_nvme_attach_controller" 00:23:10.619 },{ 00:23:10.619 "params": { 00:23:10.619 "name": "Nvme5", 00:23:10.619 "trtype": "tcp", 00:23:10.619 "traddr": "10.0.0.2", 00:23:10.619 "adrfam": "ipv4", 00:23:10.619 "trsvcid": "4420", 00:23:10.619 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:10.619 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:10.619 "hdgst": false, 00:23:10.619 "ddgst": false 00:23:10.619 }, 00:23:10.619 "method": "bdev_nvme_attach_controller" 00:23:10.619 },{ 00:23:10.619 "params": { 00:23:10.619 "name": "Nvme6", 00:23:10.619 "trtype": "tcp", 00:23:10.619 "traddr": "10.0.0.2", 00:23:10.619 "adrfam": "ipv4", 00:23:10.619 "trsvcid": "4420", 00:23:10.619 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:10.619 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:10.619 "hdgst": false, 00:23:10.619 "ddgst": false 00:23:10.619 }, 00:23:10.619 "method": "bdev_nvme_attach_controller" 00:23:10.619 },{ 00:23:10.619 "params": { 00:23:10.619 "name": "Nvme7", 00:23:10.619 "trtype": "tcp", 
00:23:10.619 "traddr": "10.0.0.2", 00:23:10.619 "adrfam": "ipv4", 00:23:10.619 "trsvcid": "4420", 00:23:10.619 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:10.619 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:10.619 "hdgst": false, 00:23:10.619 "ddgst": false 00:23:10.619 }, 00:23:10.619 "method": "bdev_nvme_attach_controller" 00:23:10.619 },{ 00:23:10.619 "params": { 00:23:10.619 "name": "Nvme8", 00:23:10.619 "trtype": "tcp", 00:23:10.619 "traddr": "10.0.0.2", 00:23:10.619 "adrfam": "ipv4", 00:23:10.619 "trsvcid": "4420", 00:23:10.619 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:10.619 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:10.619 "hdgst": false, 00:23:10.619 "ddgst": false 00:23:10.619 }, 00:23:10.619 "method": "bdev_nvme_attach_controller" 00:23:10.619 },{ 00:23:10.619 "params": { 00:23:10.619 "name": "Nvme9", 00:23:10.619 "trtype": "tcp", 00:23:10.619 "traddr": "10.0.0.2", 00:23:10.619 "adrfam": "ipv4", 00:23:10.619 "trsvcid": "4420", 00:23:10.619 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:10.619 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:10.619 "hdgst": false, 00:23:10.619 "ddgst": false 00:23:10.619 }, 00:23:10.619 "method": "bdev_nvme_attach_controller" 00:23:10.619 },{ 00:23:10.619 "params": { 00:23:10.619 "name": "Nvme10", 00:23:10.619 "trtype": "tcp", 00:23:10.619 "traddr": "10.0.0.2", 00:23:10.619 "adrfam": "ipv4", 00:23:10.619 "trsvcid": "4420", 00:23:10.619 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:10.619 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:10.619 "hdgst": false, 00:23:10.619 "ddgst": false 00:23:10.619 }, 00:23:10.619 "method": "bdev_nvme_attach_controller" 00:23:10.619 }' 00:23:10.619 [2024-07-26 11:11:29.908711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.619 [2024-07-26 11:11:29.982169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.004 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:12.004 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:12.004 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:12.004 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.004 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:12.004 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.004 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1514828 00:23:12.004 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:12.004 11:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:12.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1514828 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:12.944 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1514553 00:23:12.944 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:12.944 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:12.944 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:12.944 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:12.944 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:12.944 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:12.944 { 00:23:12.944 "params": { 00:23:12.944 "name": "Nvme$subsystem", 00:23:12.944 "trtype": "$TEST_TRANSPORT", 00:23:12.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.944 "adrfam": "ipv4", 00:23:12.944 "trsvcid": "$NVMF_PORT", 00:23:12.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.944 "hdgst": ${hdgst:-false}, 00:23:12.944 "ddgst": ${ddgst:-false} 00:23:12.944 }, 00:23:12.944 "method": "bdev_nvme_attach_controller" 00:23:12.944 } 00:23:12.944 EOF 00:23:12.944 )") 00:23:12.944 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:12.944 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:12.944 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:12.944 { 00:23:12.944 "params": { 00:23:12.944 "name": "Nvme$subsystem", 00:23:12.944 "trtype": "$TEST_TRANSPORT", 00:23:12.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.944 "adrfam": "ipv4", 00:23:12.944 "trsvcid": "$NVMF_PORT", 00:23:12.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.944 "hdgst": ${hdgst:-false}, 00:23:12.944 "ddgst": ${ddgst:-false} 00:23:12.944 }, 00:23:12.944 "method": "bdev_nvme_attach_controller" 00:23:12.944 } 00:23:12.944 EOF 00:23:12.944 )") 00:23:12.944 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:12.944 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:12.944 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:12.944 { 00:23:12.944 "params": { 00:23:12.944 "name": "Nvme$subsystem", 00:23:12.944 "trtype": "$TEST_TRANSPORT", 00:23:12.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.944 "adrfam": "ipv4", 00:23:12.944 "trsvcid": "$NVMF_PORT", 00:23:12.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.944 "hdgst": ${hdgst:-false}, 00:23:12.944 "ddgst": ${ddgst:-false} 00:23:12.944 }, 00:23:12.944 "method": "bdev_nvme_attach_controller" 00:23:12.945 } 00:23:12.945 EOF 00:23:12.945 )") 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:12.945 { 00:23:12.945 "params": { 00:23:12.945 "name": "Nvme$subsystem", 00:23:12.945 "trtype": "$TEST_TRANSPORT", 00:23:12.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.945 "adrfam": "ipv4", 00:23:12.945 "trsvcid": "$NVMF_PORT", 00:23:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.945 "hdgst": ${hdgst:-false}, 00:23:12.945 "ddgst": ${ddgst:-false} 00:23:12.945 }, 00:23:12.945 "method": "bdev_nvme_attach_controller" 00:23:12.945 } 00:23:12.945 EOF 00:23:12.945 )") 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:12.945 { 00:23:12.945 "params": { 00:23:12.945 "name": "Nvme$subsystem", 00:23:12.945 "trtype": "$TEST_TRANSPORT", 00:23:12.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.945 "adrfam": "ipv4", 00:23:12.945 "trsvcid": "$NVMF_PORT", 00:23:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.945 "hdgst": ${hdgst:-false}, 00:23:12.945 "ddgst": ${ddgst:-false} 00:23:12.945 }, 00:23:12.945 "method": "bdev_nvme_attach_controller" 00:23:12.945 } 00:23:12.945 EOF 00:23:12.945 )") 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:12.945 { 00:23:12.945 "params": { 00:23:12.945 "name": "Nvme$subsystem", 00:23:12.945 "trtype": "$TEST_TRANSPORT", 00:23:12.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.945 "adrfam": "ipv4", 00:23:12.945 "trsvcid": "$NVMF_PORT", 00:23:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.945 "hdgst": ${hdgst:-false}, 00:23:12.945 "ddgst": ${ddgst:-false} 00:23:12.945 }, 00:23:12.945 "method": "bdev_nvme_attach_controller" 00:23:12.945 } 00:23:12.945 EOF 00:23:12.945 )") 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:12.945 { 00:23:12.945 "params": { 00:23:12.945 "name": "Nvme$subsystem", 00:23:12.945 "trtype": "$TEST_TRANSPORT", 00:23:12.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.945 "adrfam": "ipv4", 00:23:12.945 "trsvcid": "$NVMF_PORT", 00:23:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.945 "hdgst": ${hdgst:-false}, 00:23:12.945 "ddgst": ${ddgst:-false} 00:23:12.945 }, 00:23:12.945 "method": "bdev_nvme_attach_controller" 00:23:12.945 } 00:23:12.945 EOF 00:23:12.945 )") 00:23:12.945 [2024-07-26 11:11:32.361355] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
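
The repeated `config+=("$(cat <<-EOF ... EOF)")` entries traced above come from the gen_nvmf_target_json helper in nvmf/common.sh. A condensed sketch of what that loop appears to be doing is shown below; it is a simplification, not the verbatim helper — the real function embeds the fragments in a fuller bdevperf config document, so here they are only wrapped in a bare JSON array to give jq a single valid input.

gen_target_json_sketch() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    # One bdev_nvme_attach_controller fragment per subsystem index, with the
    # same ${hdgst:-false}/${ddgst:-false} defaults seen in the trace.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # Comma-join the fragments (the IFS=, and printf '%s\n' steps in the trace)
  # and let jq validate the result; the [] wrapper is only for this sketch.
  local IFS=,
  printf '[%s]\n' "${config[*]}" | jq .
}

# Ten fragments, Nvme1 .. Nvme10, as used by the shutdown cases:
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420 \
  gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10
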
00:23:12.945 [2024-07-26 11:11:32.361402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515141 ] 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:12.945 { 00:23:12.945 "params": { 00:23:12.945 "name": "Nvme$subsystem", 00:23:12.945 "trtype": "$TEST_TRANSPORT", 00:23:12.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.945 "adrfam": "ipv4", 00:23:12.945 "trsvcid": "$NVMF_PORT", 00:23:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.945 "hdgst": ${hdgst:-false}, 00:23:12.945 "ddgst": ${ddgst:-false} 00:23:12.945 }, 00:23:12.945 "method": "bdev_nvme_attach_controller" 00:23:12.945 } 00:23:12.945 EOF 00:23:12.945 )") 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:12.945 { 00:23:12.945 "params": { 00:23:12.945 "name": "Nvme$subsystem", 00:23:12.945 "trtype": "$TEST_TRANSPORT", 00:23:12.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.945 "adrfam": "ipv4", 00:23:12.945 "trsvcid": "$NVMF_PORT", 00:23:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.945 "hdgst": ${hdgst:-false}, 00:23:12.945 "ddgst": ${ddgst:-false} 00:23:12.945 }, 00:23:12.945 "method": "bdev_nvme_attach_controller" 00:23:12.945 } 00:23:12.945 EOF 00:23:12.945 )") 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:12.945 { 00:23:12.945 "params": { 00:23:12.945 "name": "Nvme$subsystem", 00:23:12.945 "trtype": "$TEST_TRANSPORT", 00:23:12.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.945 "adrfam": "ipv4", 00:23:12.945 "trsvcid": "$NVMF_PORT", 00:23:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.945 "hdgst": ${hdgst:-false}, 00:23:12.945 "ddgst": ${ddgst:-false} 00:23:12.945 }, 00:23:12.945 "method": "bdev_nvme_attach_controller" 00:23:12.945 } 00:23:12.945 EOF 00:23:12.945 )") 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:12.945 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
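
The `--json /dev/fd/62` argument recorded for bdevperf a few entries back is simply what bash process substitution expands to, so the generated config never touches disk. A minimal sketch of that invocation pattern follows, assuming nvmf/common.sh (which defines gen_nvmf_target_json) has been sourced; the flags are copied from the trace and the SPDK path is shortened to a variable.

# The generated config is handed to bdevperf through process substitution;
# inside the child the <(...) pipe appears as /dev/fd/NN, hence the
# "--json /dev/fd/62" recorded above. Flags as seen in the trace:
#   -q 64       queue depth per bdev
#   -o 65536    64 KiB I/O size
#   -w verify   verification workload (write, then read back and compare)
#   -t 1        run time in seconds
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1
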
00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:12.945 11:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:12.945 "params": { 00:23:12.945 "name": "Nvme1", 00:23:12.945 "trtype": "tcp", 00:23:12.945 "traddr": "10.0.0.2", 00:23:12.945 "adrfam": "ipv4", 00:23:12.945 "trsvcid": "4420", 00:23:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.945 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.945 "hdgst": false, 00:23:12.945 "ddgst": false 00:23:12.945 }, 00:23:12.945 "method": "bdev_nvme_attach_controller" 00:23:12.945 },{ 00:23:12.945 "params": { 00:23:12.945 "name": "Nvme2", 00:23:12.945 "trtype": "tcp", 00:23:12.945 "traddr": "10.0.0.2", 00:23:12.945 "adrfam": "ipv4", 00:23:12.945 "trsvcid": "4420", 00:23:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:12.945 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:12.945 "hdgst": false, 00:23:12.945 "ddgst": false 00:23:12.945 }, 00:23:12.945 "method": "bdev_nvme_attach_controller" 00:23:12.945 },{ 00:23:12.945 "params": { 00:23:12.945 "name": "Nvme3", 00:23:12.945 "trtype": "tcp", 00:23:12.945 "traddr": "10.0.0.2", 00:23:12.945 "adrfam": "ipv4", 00:23:12.945 "trsvcid": "4420", 00:23:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:12.945 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:12.945 "hdgst": false, 00:23:12.945 "ddgst": false 00:23:12.945 }, 00:23:12.945 "method": "bdev_nvme_attach_controller" 00:23:12.945 },{ 00:23:12.945 "params": { 00:23:12.945 "name": "Nvme4", 00:23:12.945 "trtype": "tcp", 00:23:12.945 "traddr": "10.0.0.2", 00:23:12.945 "adrfam": "ipv4", 00:23:12.945 "trsvcid": "4420", 00:23:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:12.945 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:12.945 "hdgst": false, 00:23:12.945 "ddgst": false 00:23:12.945 }, 00:23:12.945 "method": "bdev_nvme_attach_controller" 00:23:12.945 },{ 00:23:12.945 "params": { 00:23:12.945 "name": "Nvme5", 00:23:12.945 "trtype": "tcp", 00:23:12.945 "traddr": "10.0.0.2", 00:23:12.945 "adrfam": "ipv4", 00:23:12.945 "trsvcid": "4420", 00:23:12.945 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:12.945 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:12.945 "hdgst": false, 00:23:12.945 "ddgst": false 00:23:12.946 }, 00:23:12.946 "method": "bdev_nvme_attach_controller" 00:23:12.946 },{ 00:23:12.946 "params": { 00:23:12.946 "name": "Nvme6", 00:23:12.946 "trtype": "tcp", 00:23:12.946 "traddr": "10.0.0.2", 00:23:12.946 "adrfam": "ipv4", 00:23:12.946 "trsvcid": "4420", 00:23:12.946 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:12.946 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:12.946 "hdgst": false, 00:23:12.946 "ddgst": false 00:23:12.946 }, 00:23:12.946 "method": "bdev_nvme_attach_controller" 00:23:12.946 },{ 00:23:12.946 "params": { 00:23:12.946 "name": "Nvme7", 00:23:12.946 "trtype": "tcp", 00:23:12.946 "traddr": "10.0.0.2", 00:23:12.946 "adrfam": "ipv4", 00:23:12.946 "trsvcid": "4420", 00:23:12.946 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:12.946 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:12.946 "hdgst": false, 00:23:12.946 "ddgst": false 00:23:12.946 }, 00:23:12.946 "method": "bdev_nvme_attach_controller" 00:23:12.946 },{ 00:23:12.946 "params": { 00:23:12.946 "name": "Nvme8", 00:23:12.946 "trtype": "tcp", 00:23:12.946 "traddr": "10.0.0.2", 00:23:12.946 "adrfam": "ipv4", 00:23:12.946 "trsvcid": "4420", 00:23:12.946 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:12.946 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:12.946 "hdgst": false, 00:23:12.946 "ddgst": false 00:23:12.946 }, 00:23:12.946 "method": "bdev_nvme_attach_controller" 00:23:12.946 },{ 00:23:12.946 "params": { 00:23:12.946 "name": "Nvme9", 00:23:12.946 "trtype": "tcp", 00:23:12.946 "traddr": "10.0.0.2", 00:23:12.946 "adrfam": "ipv4", 00:23:12.946 "trsvcid": "4420", 00:23:12.946 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:12.946 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:12.946 "hdgst": false, 00:23:12.946 "ddgst": false 00:23:12.946 }, 00:23:12.946 "method": "bdev_nvme_attach_controller" 00:23:12.946 },{ 00:23:12.946 "params": { 00:23:12.946 "name": "Nvme10", 00:23:12.946 "trtype": "tcp", 00:23:12.946 "traddr": "10.0.0.2", 00:23:12.946 "adrfam": "ipv4", 00:23:12.946 "trsvcid": "4420", 00:23:12.946 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:12.946 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:12.946 "hdgst": false, 00:23:12.946 "ddgst": false 00:23:12.946 }, 00:23:12.946 "method": "bdev_nvme_attach_controller" 00:23:12.946 }' 00:23:12.946 [2024-07-26 11:11:32.416324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.206 [2024-07-26 11:11:32.492788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.587 Running I/O for 1 seconds... 00:23:15.963 00:23:15.963 Latency(us) 00:23:15.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.963 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.964 Verification LBA range: start 0x0 length 0x400 00:23:15.964 Nvme1n1 : 1.07 299.44 18.72 0.00 0.00 211551.45 23934.89 201508.95 00:23:15.964 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.964 Verification LBA range: start 0x0 length 0x400 00:23:15.964 Nvme2n1 : 1.06 181.47 11.34 0.00 0.00 343578.12 24390.79 341015.15 00:23:15.964 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.964 Verification LBA range: start 0x0 length 0x400 00:23:15.964 Nvme3n1 : 1.10 291.02 18.19 0.00 0.00 211361.79 35788.35 203332.56 00:23:15.964 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.964 Verification LBA range: start 0x0 length 0x400 00:23:15.964 Nvme4n1 : 1.13 170.28 10.64 0.00 0.00 356005.32 35560.40 362898.48 00:23:15.964 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.964 Verification LBA range: start 0x0 length 0x400 00:23:15.964 Nvme5n1 : 1.12 228.54 14.28 0.00 0.00 261615.30 23478.98 260776.29 00:23:15.964 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.964 Verification LBA range: start 0x0 length 0x400 00:23:15.964 Nvme6n1 : 1.17 272.72 17.05 0.00 0.00 216712.06 18578.03 229774.91 00:23:15.964 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.964 Verification LBA range: start 0x0 length 0x400 00:23:15.964 Nvme7n1 : 1.15 222.45 13.90 0.00 0.00 260600.65 21769.35 273541.57 00:23:15.964 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.964 Verification LBA range: start 0x0 length 0x400 00:23:15.964 Nvme8n1 : 1.15 279.29 17.46 0.00 0.00 204671.07 22909.11 219745.06 00:23:15.964 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.964 Verification LBA range: start 0x0 length 0x400 00:23:15.964 Nvme9n1 : 1.15 278.32 17.39 0.00 0.00 202497.34 20857.54 227951.30 00:23:15.964 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.964 Verification LBA range: 
start 0x0 length 0x400 00:23:15.964 Nvme10n1 : 1.18 271.48 16.97 0.00 0.00 205111.47 14702.86 242540.19 00:23:15.964 =================================================================================================================== 00:23:15.964 Total : 2495.01 155.94 0.00 0.00 237435.00 14702.86 362898.48 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:15.964 rmmod nvme_tcp 00:23:15.964 rmmod nvme_fabrics 00:23:15.964 rmmod nvme_keyring 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1514553 ']' 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1514553 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1514553 ']' 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1514553 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:15.964 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1514553 00:23:16.223 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:16.223 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:16.223 11:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1514553' 00:23:16.223 killing process with pid 1514553 00:23:16.223 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1514553 00:23:16.223 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1514553 00:23:16.482 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:16.482 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:16.482 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:16.482 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:16.482 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:16.482 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.482 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.482 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.021 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:19.021 00:23:19.021 real 0m15.072s 00:23:19.021 user 0m34.628s 00:23:19.021 sys 0m5.463s 00:23:19.021 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:19.021 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.021 ************************************ 00:23:19.021 END TEST nvmf_shutdown_tc1 00:23:19.021 ************************************ 00:23:19.021 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:19.021 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:19.021 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:19.021 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:19.021 ************************************ 00:23:19.021 START TEST nvmf_shutdown_tc2 00:23:19.021 ************************************ 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.021 
11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:19.021 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:19.021 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.021 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:19.021 Found net devices under 0000:86:00.0: cvl_0_0 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:19.022 Found net devices under 0000:86:00.1: cvl_0_1 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:19.022 
11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:19.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:19.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:23:19.022 00:23:19.022 --- 10.0.0.2 ping statistics --- 00:23:19.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.022 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:19.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:19.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:23:19.022 00:23:19.022 --- 10.0.0.1 ping statistics --- 00:23:19.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.022 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1516346 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1516346 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1516346 ']' 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.022 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.022 [2024-07-26 11:11:38.357697] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:19.022 [2024-07-26 11:11:38.357740] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.022 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.022 [2024-07-26 11:11:38.413496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:19.022 [2024-07-26 11:11:38.493728] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.022 [2024-07-26 11:11:38.493766] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.022 [2024-07-26 11:11:38.493773] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.022 [2024-07-26 11:11:38.493779] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.022 [2024-07-26 11:11:38.493784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:19.022 [2024-07-26 11:11:38.493880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.022 [2024-07-26 11:11:38.493975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:19.022 [2024-07-26 11:11:38.493993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:19.022 [2024-07-26 11:11:38.493995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.963 [2024-07-26 11:11:39.223382] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:19.963 11:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.963 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.963 Malloc1 00:23:19.963 [2024-07-26 11:11:39.319342] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.963 Malloc2 00:23:19.963 Malloc3 00:23:19.963 Malloc4 00:23:20.246 Malloc5 00:23:20.246 Malloc6 00:23:20.246 Malloc7 00:23:20.246 Malloc8 00:23:20.246 Malloc9 00:23:20.246 Malloc10 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1516625 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1516625 /var/tmp/bdevperf.sock 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1516625 ']' 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
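
The rpcs.txt batch replayed by rpc_cmd above is not printed verbatim in this log, but the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listeners it reports imply a per-subsystem RPC sequence along the following lines. This is a reconstruction, not the literal file; the malloc sizes and serial numbers are placeholders, while the nvmf_create_transport call does appear verbatim in the trace.

# Reconstruction of the target-side setup that yields the "Malloc1 .. Malloc10"
# bdevs and the 10.0.0.2:4420 listeners logged above (sizes/serials assumed).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# One TCP transport for the whole target (this call is verbatim in the trace).
$rpc nvmf_create_transport -t tcp -o -u 8192

for i in {1..10}; do
    # Back each subsystem with a RAM-disk bdev (64 MiB, 512 B blocks assumed).
    $rpc bdev_malloc_create -b "Malloc$i" 64 512
    # Create the subsystem, expose the bdev as a namespace, add a TCP listener.
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK000000000000$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
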
00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.527 { 00:23:20.527 "params": { 00:23:20.527 "name": "Nvme$subsystem", 00:23:20.527 "trtype": "$TEST_TRANSPORT", 00:23:20.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.527 "adrfam": "ipv4", 00:23:20.527 "trsvcid": "$NVMF_PORT", 00:23:20.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.527 "hdgst": ${hdgst:-false}, 00:23:20.527 "ddgst": ${ddgst:-false} 00:23:20.527 }, 00:23:20.527 "method": "bdev_nvme_attach_controller" 00:23:20.527 } 00:23:20.527 EOF 00:23:20.527 )") 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.527 { 00:23:20.527 "params": { 00:23:20.527 "name": "Nvme$subsystem", 00:23:20.527 "trtype": "$TEST_TRANSPORT", 00:23:20.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.527 "adrfam": "ipv4", 00:23:20.527 "trsvcid": "$NVMF_PORT", 00:23:20.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.527 "hdgst": ${hdgst:-false}, 00:23:20.527 "ddgst": ${ddgst:-false} 00:23:20.527 }, 00:23:20.527 "method": "bdev_nvme_attach_controller" 00:23:20.527 } 00:23:20.527 EOF 00:23:20.527 )") 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.527 { 00:23:20.527 "params": { 00:23:20.527 "name": "Nvme$subsystem", 00:23:20.527 "trtype": "$TEST_TRANSPORT", 00:23:20.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.527 "adrfam": "ipv4", 00:23:20.527 "trsvcid": "$NVMF_PORT", 00:23:20.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.527 "hdgst": ${hdgst:-false}, 00:23:20.527 "ddgst": ${ddgst:-false} 00:23:20.527 }, 00:23:20.527 "method": "bdev_nvme_attach_controller" 00:23:20.527 } 00:23:20.527 EOF 00:23:20.527 )") 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:23:20.527 { 00:23:20.527 "params": { 00:23:20.527 "name": "Nvme$subsystem", 00:23:20.527 "trtype": "$TEST_TRANSPORT", 00:23:20.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.527 "adrfam": "ipv4", 00:23:20.527 "trsvcid": "$NVMF_PORT", 00:23:20.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.527 "hdgst": ${hdgst:-false}, 00:23:20.527 "ddgst": ${ddgst:-false} 00:23:20.527 }, 00:23:20.527 "method": "bdev_nvme_attach_controller" 00:23:20.527 } 00:23:20.527 EOF 00:23:20.527 )") 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.527 { 00:23:20.527 "params": { 00:23:20.527 "name": "Nvme$subsystem", 00:23:20.527 "trtype": "$TEST_TRANSPORT", 00:23:20.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.527 "adrfam": "ipv4", 00:23:20.527 "trsvcid": "$NVMF_PORT", 00:23:20.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.527 "hdgst": ${hdgst:-false}, 00:23:20.527 "ddgst": ${ddgst:-false} 00:23:20.527 }, 00:23:20.527 "method": "bdev_nvme_attach_controller" 00:23:20.527 } 00:23:20.527 EOF 00:23:20.527 )") 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.527 { 00:23:20.527 "params": { 00:23:20.527 "name": "Nvme$subsystem", 00:23:20.527 "trtype": "$TEST_TRANSPORT", 00:23:20.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.527 "adrfam": "ipv4", 00:23:20.527 "trsvcid": "$NVMF_PORT", 00:23:20.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.527 "hdgst": ${hdgst:-false}, 00:23:20.527 "ddgst": ${ddgst:-false} 00:23:20.527 }, 00:23:20.527 "method": "bdev_nvme_attach_controller" 00:23:20.527 } 00:23:20.527 EOF 00:23:20.527 )") 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.527 { 00:23:20.527 "params": { 00:23:20.527 "name": "Nvme$subsystem", 00:23:20.527 "trtype": "$TEST_TRANSPORT", 00:23:20.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.527 "adrfam": "ipv4", 00:23:20.527 "trsvcid": "$NVMF_PORT", 00:23:20.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.527 "hdgst": ${hdgst:-false}, 00:23:20.527 "ddgst": ${ddgst:-false} 00:23:20.527 }, 00:23:20.527 "method": "bdev_nvme_attach_controller" 00:23:20.527 } 00:23:20.527 EOF 00:23:20.527 )") 00:23:20.527 [2024-07-26 11:11:39.798338] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
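
waitforlisten, called above with pid 1516625 and /var/tmp/bdevperf.sock, is an autotest_common.sh helper whose body is not shown in this log. A minimal stand-in for the idea — poll the application's UNIX-domain RPC socket until it answers, and give up early if the process dies — might look like this; it is not the real helper, and it assumes $rootdir points at the SPDK checkout.

# Minimal stand-in for the waitforlisten step recorded above.
wait_for_rpc_sock() {
    local pid=$1 sock=$2 i
    for ((i = 0; i < 100; i++)); do
        # If the process is gone, there is nothing to wait for.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods succeeds once the app is listening on $sock.
        if "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

# e.g. wait_for_rpc_sock "$perfpid" /var/tmp/bdevperf.sock
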
00:23:20.527 [2024-07-26 11:11:39.798393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516625 ] 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.527 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.527 { 00:23:20.527 "params": { 00:23:20.527 "name": "Nvme$subsystem", 00:23:20.527 "trtype": "$TEST_TRANSPORT", 00:23:20.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.527 "adrfam": "ipv4", 00:23:20.527 "trsvcid": "$NVMF_PORT", 00:23:20.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.527 "hdgst": ${hdgst:-false}, 00:23:20.527 "ddgst": ${ddgst:-false} 00:23:20.527 }, 00:23:20.527 "method": "bdev_nvme_attach_controller" 00:23:20.527 } 00:23:20.528 EOF 00:23:20.528 )") 00:23:20.528 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:20.528 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.528 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.528 { 00:23:20.528 "params": { 00:23:20.528 "name": "Nvme$subsystem", 00:23:20.528 "trtype": "$TEST_TRANSPORT", 00:23:20.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.528 "adrfam": "ipv4", 00:23:20.528 "trsvcid": "$NVMF_PORT", 00:23:20.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.528 "hdgst": ${hdgst:-false}, 00:23:20.528 "ddgst": ${ddgst:-false} 00:23:20.528 }, 00:23:20.528 "method": "bdev_nvme_attach_controller" 00:23:20.528 } 00:23:20.528 EOF 00:23:20.528 )") 00:23:20.528 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:20.528 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.528 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.528 { 00:23:20.528 "params": { 00:23:20.528 "name": "Nvme$subsystem", 00:23:20.528 "trtype": "$TEST_TRANSPORT", 00:23:20.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.528 "adrfam": "ipv4", 00:23:20.528 "trsvcid": "$NVMF_PORT", 00:23:20.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.528 "hdgst": ${hdgst:-false}, 00:23:20.528 "ddgst": ${ddgst:-false} 00:23:20.528 }, 00:23:20.528 "method": "bdev_nvme_attach_controller" 00:23:20.528 } 00:23:20.528 EOF 00:23:20.528 )") 00:23:20.528 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:20.528 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.528 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
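After the loop, the same helper glues the stanzas into one document (nvmf/common.sh@556-558 above): IFS is set to "," so "${config[*]}" expands to a comma-separated list, printf emits it, and jq validates and pretty-prints the result before it reaches bdevperf through a process-substitution descriptor (the "--json /dev/fd/63" form is visible in the tc3 invocation further down in this log). A sketch of that assembly step; the enclosing "subsystems"/"bdev" envelope is assumed from how bdevperf consumes a JSON config and is not itself visible in this part of the trace:

jq . <<-JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(
            IFS=","
            printf '%s\n' "${config[*]}"
        )
      ]
    }
  ]
}
JSON

The substituted result is exactly what the printf output below shows: ten attach-controller entries, Nvme1 through Nvme10, all targeting 10.0.0.2:4420 over TCP with header and data digests disabled.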
00:23:20.528 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:20.528 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:20.528 "params": { 00:23:20.528 "name": "Nvme1", 00:23:20.528 "trtype": "tcp", 00:23:20.528 "traddr": "10.0.0.2", 00:23:20.528 "adrfam": "ipv4", 00:23:20.528 "trsvcid": "4420", 00:23:20.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.528 "hdgst": false, 00:23:20.528 "ddgst": false 00:23:20.528 }, 00:23:20.528 "method": "bdev_nvme_attach_controller" 00:23:20.528 },{ 00:23:20.528 "params": { 00:23:20.528 "name": "Nvme2", 00:23:20.528 "trtype": "tcp", 00:23:20.528 "traddr": "10.0.0.2", 00:23:20.528 "adrfam": "ipv4", 00:23:20.528 "trsvcid": "4420", 00:23:20.528 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:20.528 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:20.528 "hdgst": false, 00:23:20.528 "ddgst": false 00:23:20.528 }, 00:23:20.528 "method": "bdev_nvme_attach_controller" 00:23:20.528 },{ 00:23:20.528 "params": { 00:23:20.528 "name": "Nvme3", 00:23:20.528 "trtype": "tcp", 00:23:20.528 "traddr": "10.0.0.2", 00:23:20.528 "adrfam": "ipv4", 00:23:20.528 "trsvcid": "4420", 00:23:20.528 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:20.528 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:20.528 "hdgst": false, 00:23:20.528 "ddgst": false 00:23:20.528 }, 00:23:20.528 "method": "bdev_nvme_attach_controller" 00:23:20.528 },{ 00:23:20.528 "params": { 00:23:20.528 "name": "Nvme4", 00:23:20.528 "trtype": "tcp", 00:23:20.528 "traddr": "10.0.0.2", 00:23:20.528 "adrfam": "ipv4", 00:23:20.528 "trsvcid": "4420", 00:23:20.528 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:20.528 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:20.528 "hdgst": false, 00:23:20.528 "ddgst": false 00:23:20.528 }, 00:23:20.528 "method": "bdev_nvme_attach_controller" 00:23:20.528 },{ 00:23:20.528 "params": { 00:23:20.528 "name": "Nvme5", 00:23:20.528 "trtype": "tcp", 00:23:20.528 "traddr": "10.0.0.2", 00:23:20.528 "adrfam": "ipv4", 00:23:20.528 "trsvcid": "4420", 00:23:20.528 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:20.528 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:20.528 "hdgst": false, 00:23:20.528 "ddgst": false 00:23:20.528 }, 00:23:20.528 "method": "bdev_nvme_attach_controller" 00:23:20.528 },{ 00:23:20.528 "params": { 00:23:20.528 "name": "Nvme6", 00:23:20.528 "trtype": "tcp", 00:23:20.528 "traddr": "10.0.0.2", 00:23:20.528 "adrfam": "ipv4", 00:23:20.528 "trsvcid": "4420", 00:23:20.528 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:20.528 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:20.528 "hdgst": false, 00:23:20.528 "ddgst": false 00:23:20.528 }, 00:23:20.528 "method": "bdev_nvme_attach_controller" 00:23:20.528 },{ 00:23:20.528 "params": { 00:23:20.528 "name": "Nvme7", 00:23:20.528 "trtype": "tcp", 00:23:20.528 "traddr": "10.0.0.2", 00:23:20.528 "adrfam": "ipv4", 00:23:20.528 "trsvcid": "4420", 00:23:20.528 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:20.528 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:20.528 "hdgst": false, 00:23:20.528 "ddgst": false 00:23:20.528 }, 00:23:20.528 "method": "bdev_nvme_attach_controller" 00:23:20.528 },{ 00:23:20.528 "params": { 00:23:20.528 "name": "Nvme8", 00:23:20.528 "trtype": "tcp", 00:23:20.528 "traddr": "10.0.0.2", 00:23:20.528 "adrfam": "ipv4", 00:23:20.528 "trsvcid": "4420", 00:23:20.528 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:20.528 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:20.528 "hdgst": false, 00:23:20.528 "ddgst": false 00:23:20.528 }, 00:23:20.528 "method": "bdev_nvme_attach_controller" 00:23:20.528 },{ 00:23:20.528 "params": { 00:23:20.528 "name": "Nvme9", 00:23:20.528 "trtype": "tcp", 00:23:20.528 "traddr": "10.0.0.2", 00:23:20.528 "adrfam": "ipv4", 00:23:20.528 "trsvcid": "4420", 00:23:20.528 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:20.528 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:20.528 "hdgst": false, 00:23:20.528 "ddgst": false 00:23:20.528 }, 00:23:20.528 "method": "bdev_nvme_attach_controller" 00:23:20.528 },{ 00:23:20.528 "params": { 00:23:20.528 "name": "Nvme10", 00:23:20.528 "trtype": "tcp", 00:23:20.528 "traddr": "10.0.0.2", 00:23:20.528 "adrfam": "ipv4", 00:23:20.528 "trsvcid": "4420", 00:23:20.528 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:20.528 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:20.528 "hdgst": false, 00:23:20.528 "ddgst": false 00:23:20.528 }, 00:23:20.528 "method": "bdev_nvme_attach_controller" 00:23:20.528 }' 00:23:20.528 [2024-07-26 11:11:39.855155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.528 [2024-07-26 11:11:39.930286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.907 Running I/O for 10 seconds... 00:23:21.907 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.907 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:21.907 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:21.907 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.907 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:22.168 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:22.428 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:22.428 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:22.428 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:22.428 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:22.428 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.428 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:22.428 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.428 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:22.428 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:22.428 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- 
# killprocess 1516625
00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1516625 ']'
00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1516625
00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname
00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1516625
00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1516625'
00:23:22.688 killing process with pid 1516625
00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1516625
00:23:22.688 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1516625
00:23:22.948 Received shutdown signal, test time was about 0.935079 seconds
00:23:22.949
00:23:22.949 Latency(us)
00:23:22.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:22.949 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.949 Verification LBA range: start 0x0 length 0x400
00:23:22.949 Nvme1n1 : 0.90 213.43 13.34 0.00 0.00 294587.88 23251.03 238892.97
00:23:22.949 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.949 Verification LBA range: start 0x0 length 0x400
00:23:22.949 Nvme2n1 : 0.88 291.69 18.23 0.00 0.00 212894.50 22339.23 217009.64
00:23:22.949 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.949 Verification LBA range: start 0x0 length 0x400
00:23:22.949 Nvme3n1 : 0.90 284.05 17.75 0.00 0.00 214454.98 21313.45 235245.75
00:23:22.949 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.949 Verification LBA range: start 0x0 length 0x400
00:23:22.949 Nvme4n1 : 0.91 211.54 13.22 0.00 0.00 283420.05 21769.35 280836.01
00:23:22.949 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.949 Verification LBA range: start 0x0 length 0x400
00:23:22.949 Nvme5n1 : 0.92 279.19 17.45 0.00 0.00 210889.24 21199.47 218833.25
00:23:22.949 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.949 Verification LBA range: start 0x0 length 0x400
00:23:22.949 Nvme6n1 : 0.93 205.48 12.84 0.00 0.00 269091.17 36016.31 262599.90
00:23:22.949 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.949 Verification LBA range: start 0x0 length 0x400
00:23:22.949 Nvme7n1 : 0.92 276.40 17.27 0.00 0.00 204997.28 15044.79 244363.80
00:23:22.949 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.949 Verification LBA range: start 0x0 length 0x400
00:23:22.949 Nvme8n1 : 0.89 289.22 18.08 0.00 0.00 190889.18 23251.03 216097.84
00:23:22.949 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.949 Verification LBA range: start 0x0 length 0x400
00:23:22.949 Nvme9n1 : 0.90 212.24 13.27 0.00 0.00 256113.38 39207.62 262599.90
00:23:22.949 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:22.949 Verification LBA range: start 0x0 length 0x400
00:23:22.949 Nvme10n1 : 0.93 207.15 12.95 0.00 0.00 258145.95 20857.54 313660.99
00:23:22.949 ===================================================================================================================
00:23:22.949 Total : 2470.39 154.40 0.00 0.00 234886.94 15044.79 313660.99
00:23:23.208 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1516346
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:24.144 rmmod nvme_tcp
00:23:24.144 rmmod nvme_fabrics
00:23:24.144 rmmod nvme_keyring
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1516346 ']'
00:23:24.144 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1516346
00:23:24.145 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1516346 ']'
00:23:24.145 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1516346
00:23:24.145 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname
00:23:24.145 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 --
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:24.145 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1516346 00:23:24.145 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:24.145 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:24.145 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1516346' 00:23:24.145 killing process with pid 1516346 00:23:24.145 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1516346 00:23:24.145 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1516346 00:23:24.712 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:24.712 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:24.712 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:24.712 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.712 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:24.713 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.713 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.713 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:26.622 00:23:26.622 real 0m7.997s 00:23:26.622 user 0m24.274s 00:23:26.622 sys 0m1.374s 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.622 ************************************ 00:23:26.622 END TEST nvmf_shutdown_tc2 00:23:26.622 ************************************ 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:26.622 ************************************ 00:23:26.622 START TEST nvmf_shutdown_tc3 00:23:26.622 ************************************ 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # 
starttarget 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:26.622 11:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:26.622 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:26.622 11:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:26.622 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:26.622 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:26.623 Found net devices under 0000:86:00.0: cvl_0_0 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:26.623 Found net devices under 0000:86:00.1: cvl_0_1 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:26.623 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:26.883 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.883 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.883 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.883 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.883 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:26.883 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.883 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.883 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.883 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:26.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:23:26.883 00:23:26.883 --- 10.0.0.2 ping statistics --- 00:23:26.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.883 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:23:26.883 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:27.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:23:27.142 00:23:27.142 --- 10.0.0.1 ping statistics --- 00:23:27.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.142 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:23:27.142 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.142 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:27.142 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:27.142 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.142 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:27.142 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:27.142 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1517697 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1517697 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- 
# '[' -z 1517697 ']' 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:27.143 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:27.143 [2024-07-26 11:11:46.458377] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:27.143 [2024-07-26 11:11:46.458423] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.143 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.143 [2024-07-26 11:11:46.516525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.143 [2024-07-26 11:11:46.589490] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.143 [2024-07-26 11:11:46.589531] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.143 [2024-07-26 11:11:46.589538] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.143 [2024-07-26 11:11:46.589544] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.143 [2024-07-26 11:11:46.589549] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
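Before the tc3 target is exercised, nvmf_tcp_init has already rebuilt the test topology that the trace above walks through. Condensed into a readable sequence (interface names and addresses exactly as logged; an abridged summary of commands already shown, not additional setup):

ip netns add cvl_0_0_ns_spdk                         # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespaced target -> host

nvmf_tgt (pid 1517697, core mask 0x1E) then runs inside that namespace, so the TCP listener it creates on 10.0.0.2:4420 sits behind cvl_0_0, while bdevperf connects from the host side over cvl_0_1.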
00:23:27.143 [2024-07-26 11:11:46.589597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.143 [2024-07-26 11:11:46.589689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.143 [2024-07-26 11:11:46.589797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.143 [2024-07-26 11:11:46.589798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.080 [2024-07-26 11:11:47.324324] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.080 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.080 Malloc1 00:23:28.080 [2024-07-26 11:11:47.420289] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.080 Malloc2 00:23:28.080 Malloc3 00:23:28.080 Malloc4 00:23:28.080 Malloc5 00:23:28.339 Malloc6 00:23:28.339 Malloc7 00:23:28.339 Malloc8 00:23:28.339 Malloc9 00:23:28.339 Malloc10 00:23:28.339 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.339 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:28.339 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:28.339 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.599 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1517980 00:23:28.599 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1517980 /var/tmp/bdevperf.sock 00:23:28.599 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1517980 ']' 00:23:28.599 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.599 11:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:28.599 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.599 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.600 { 00:23:28.600 "params": { 00:23:28.600 "name": "Nvme$subsystem", 00:23:28.600 "trtype": "$TEST_TRANSPORT", 00:23:28.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.600 "adrfam": "ipv4", 00:23:28.600 "trsvcid": "$NVMF_PORT", 00:23:28.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.600 "hdgst": ${hdgst:-false}, 00:23:28.600 "ddgst": ${ddgst:-false} 00:23:28.600 }, 00:23:28.600 "method": "bdev_nvme_attach_controller" 00:23:28.600 } 00:23:28.600 EOF 00:23:28.600 )") 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.600 { 00:23:28.600 "params": { 00:23:28.600 "name": "Nvme$subsystem", 00:23:28.600 "trtype": "$TEST_TRANSPORT", 00:23:28.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.600 "adrfam": "ipv4", 00:23:28.600 "trsvcid": "$NVMF_PORT", 00:23:28.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.600 "hdgst": ${hdgst:-false}, 00:23:28.600 "ddgst": ${ddgst:-false} 00:23:28.600 }, 00:23:28.600 "method": "bdev_nvme_attach_controller" 00:23:28.600 } 00:23:28.600 EOF 00:23:28.600 )") 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.600 { 00:23:28.600 "params": { 00:23:28.600 
"name": "Nvme$subsystem", 00:23:28.600 "trtype": "$TEST_TRANSPORT", 00:23:28.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.600 "adrfam": "ipv4", 00:23:28.600 "trsvcid": "$NVMF_PORT", 00:23:28.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.600 "hdgst": ${hdgst:-false}, 00:23:28.600 "ddgst": ${ddgst:-false} 00:23:28.600 }, 00:23:28.600 "method": "bdev_nvme_attach_controller" 00:23:28.600 } 00:23:28.600 EOF 00:23:28.600 )") 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.600 { 00:23:28.600 "params": { 00:23:28.600 "name": "Nvme$subsystem", 00:23:28.600 "trtype": "$TEST_TRANSPORT", 00:23:28.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.600 "adrfam": "ipv4", 00:23:28.600 "trsvcid": "$NVMF_PORT", 00:23:28.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.600 "hdgst": ${hdgst:-false}, 00:23:28.600 "ddgst": ${ddgst:-false} 00:23:28.600 }, 00:23:28.600 "method": "bdev_nvme_attach_controller" 00:23:28.600 } 00:23:28.600 EOF 00:23:28.600 )") 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.600 { 00:23:28.600 "params": { 00:23:28.600 "name": "Nvme$subsystem", 00:23:28.600 "trtype": "$TEST_TRANSPORT", 00:23:28.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.600 "adrfam": "ipv4", 00:23:28.600 "trsvcid": "$NVMF_PORT", 00:23:28.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.600 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.600 "hdgst": ${hdgst:-false}, 00:23:28.600 "ddgst": ${ddgst:-false} 00:23:28.600 }, 00:23:28.600 "method": "bdev_nvme_attach_controller" 00:23:28.600 } 00:23:28.600 EOF 00:23:28.600 )") 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.600 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.600 { 00:23:28.600 "params": { 00:23:28.600 "name": "Nvme$subsystem", 00:23:28.600 "trtype": "$TEST_TRANSPORT", 00:23:28.600 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.600 "adrfam": "ipv4", 00:23:28.600 "trsvcid": "$NVMF_PORT", 00:23:28.600 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.601 "hdgst": ${hdgst:-false}, 00:23:28.601 "ddgst": ${ddgst:-false} 00:23:28.601 }, 00:23:28.601 "method": "bdev_nvme_attach_controller" 00:23:28.601 } 00:23:28.601 EOF 00:23:28.601 )") 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.601 { 00:23:28.601 "params": { 00:23:28.601 "name": "Nvme$subsystem", 00:23:28.601 "trtype": "$TEST_TRANSPORT", 00:23:28.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.601 "adrfam": "ipv4", 00:23:28.601 "trsvcid": "$NVMF_PORT", 00:23:28.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.601 "hdgst": ${hdgst:-false}, 00:23:28.601 "ddgst": ${ddgst:-false} 00:23:28.601 }, 00:23:28.601 "method": "bdev_nvme_attach_controller" 00:23:28.601 } 00:23:28.601 EOF 00:23:28.601 )") 00:23:28.601 [2024-07-26 11:11:47.888628] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:28.601 [2024-07-26 11:11:47.888681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517980 ] 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.601 { 00:23:28.601 "params": { 00:23:28.601 "name": "Nvme$subsystem", 00:23:28.601 "trtype": "$TEST_TRANSPORT", 00:23:28.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.601 "adrfam": "ipv4", 00:23:28.601 "trsvcid": "$NVMF_PORT", 00:23:28.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.601 "hdgst": ${hdgst:-false}, 00:23:28.601 "ddgst": ${ddgst:-false} 00:23:28.601 }, 00:23:28.601 "method": "bdev_nvme_attach_controller" 00:23:28.601 } 00:23:28.601 EOF 00:23:28.601 )") 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.601 { 00:23:28.601 "params": { 00:23:28.601 "name": "Nvme$subsystem", 00:23:28.601 "trtype": "$TEST_TRANSPORT", 00:23:28.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.601 "adrfam": "ipv4", 00:23:28.601 "trsvcid": "$NVMF_PORT", 00:23:28.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.601 "hdgst": ${hdgst:-false}, 00:23:28.601 "ddgst": ${ddgst:-false} 00:23:28.601 }, 00:23:28.601 "method": "bdev_nvme_attach_controller" 00:23:28.601 } 00:23:28.601 EOF 00:23:28.601 )") 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:28.601 { 00:23:28.601 "params": { 00:23:28.601 "name": "Nvme$subsystem", 00:23:28.601 "trtype": "$TEST_TRANSPORT", 00:23:28.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.601 
"adrfam": "ipv4", 00:23:28.601 "trsvcid": "$NVMF_PORT", 00:23:28.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.601 "hdgst": ${hdgst:-false}, 00:23:28.601 "ddgst": ${ddgst:-false} 00:23:28.601 }, 00:23:28.601 "method": "bdev_nvme_attach_controller" 00:23:28.601 } 00:23:28.601 EOF 00:23:28.601 )") 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:28.601 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:28.601 11:11:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:28.601 "params": { 00:23:28.601 "name": "Nvme1", 00:23:28.601 "trtype": "tcp", 00:23:28.601 "traddr": "10.0.0.2", 00:23:28.601 "adrfam": "ipv4", 00:23:28.601 "trsvcid": "4420", 00:23:28.601 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.601 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.601 "hdgst": false, 00:23:28.601 "ddgst": false 00:23:28.601 }, 00:23:28.601 "method": "bdev_nvme_attach_controller" 00:23:28.601 },{ 00:23:28.601 "params": { 00:23:28.601 "name": "Nvme2", 00:23:28.601 "trtype": "tcp", 00:23:28.601 "traddr": "10.0.0.2", 00:23:28.601 "adrfam": "ipv4", 00:23:28.601 "trsvcid": "4420", 00:23:28.601 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:28.601 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:28.601 "hdgst": false, 00:23:28.601 "ddgst": false 00:23:28.601 }, 00:23:28.601 "method": "bdev_nvme_attach_controller" 00:23:28.601 },{ 00:23:28.601 "params": { 00:23:28.601 "name": "Nvme3", 00:23:28.601 "trtype": "tcp", 00:23:28.602 "traddr": "10.0.0.2", 00:23:28.602 "adrfam": "ipv4", 00:23:28.602 "trsvcid": "4420", 00:23:28.602 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:28.602 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:28.602 "hdgst": false, 00:23:28.602 "ddgst": false 00:23:28.602 }, 00:23:28.602 "method": "bdev_nvme_attach_controller" 00:23:28.602 },{ 00:23:28.602 "params": { 00:23:28.602 "name": "Nvme4", 00:23:28.602 "trtype": "tcp", 00:23:28.602 "traddr": "10.0.0.2", 00:23:28.602 "adrfam": "ipv4", 00:23:28.602 "trsvcid": "4420", 00:23:28.602 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:28.602 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:28.602 "hdgst": false, 00:23:28.602 "ddgst": false 00:23:28.602 }, 00:23:28.602 "method": "bdev_nvme_attach_controller" 00:23:28.602 },{ 00:23:28.602 "params": { 00:23:28.602 "name": "Nvme5", 00:23:28.602 "trtype": "tcp", 00:23:28.602 "traddr": "10.0.0.2", 00:23:28.602 "adrfam": "ipv4", 00:23:28.602 "trsvcid": "4420", 00:23:28.602 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:28.602 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:28.602 "hdgst": false, 00:23:28.602 "ddgst": false 00:23:28.602 }, 00:23:28.602 "method": "bdev_nvme_attach_controller" 00:23:28.602 },{ 00:23:28.602 "params": { 00:23:28.602 "name": "Nvme6", 00:23:28.602 "trtype": "tcp", 00:23:28.602 "traddr": "10.0.0.2", 00:23:28.602 "adrfam": "ipv4", 00:23:28.602 "trsvcid": "4420", 00:23:28.602 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:28.602 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:28.602 "hdgst": false, 00:23:28.602 "ddgst": false 00:23:28.602 }, 00:23:28.602 "method": "bdev_nvme_attach_controller" 00:23:28.602 },{ 00:23:28.602 "params": { 00:23:28.602 "name": "Nvme7", 
00:23:28.602 "trtype": "tcp", 00:23:28.602 "traddr": "10.0.0.2", 00:23:28.602 "adrfam": "ipv4", 00:23:28.602 "trsvcid": "4420", 00:23:28.602 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:28.602 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:28.602 "hdgst": false, 00:23:28.602 "ddgst": false 00:23:28.602 }, 00:23:28.602 "method": "bdev_nvme_attach_controller" 00:23:28.602 },{ 00:23:28.602 "params": { 00:23:28.602 "name": "Nvme8", 00:23:28.602 "trtype": "tcp", 00:23:28.602 "traddr": "10.0.0.2", 00:23:28.602 "adrfam": "ipv4", 00:23:28.602 "trsvcid": "4420", 00:23:28.602 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:28.602 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:28.602 "hdgst": false, 00:23:28.602 "ddgst": false 00:23:28.602 }, 00:23:28.602 "method": "bdev_nvme_attach_controller" 00:23:28.602 },{ 00:23:28.602 "params": { 00:23:28.602 "name": "Nvme9", 00:23:28.602 "trtype": "tcp", 00:23:28.602 "traddr": "10.0.0.2", 00:23:28.602 "adrfam": "ipv4", 00:23:28.602 "trsvcid": "4420", 00:23:28.602 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:28.602 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:28.602 "hdgst": false, 00:23:28.602 "ddgst": false 00:23:28.602 }, 00:23:28.602 "method": "bdev_nvme_attach_controller" 00:23:28.602 },{ 00:23:28.602 "params": { 00:23:28.602 "name": "Nvme10", 00:23:28.602 "trtype": "tcp", 00:23:28.602 "traddr": "10.0.0.2", 00:23:28.602 "adrfam": "ipv4", 00:23:28.602 "trsvcid": "4420", 00:23:28.602 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:28.602 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:28.602 "hdgst": false, 00:23:28.602 "ddgst": false 00:23:28.602 }, 00:23:28.602 "method": "bdev_nvme_attach_controller" 00:23:28.602 }' 00:23:28.602 [2024-07-26 11:11:47.945844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.602 [2024-07-26 11:11:48.021263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.983 Running I/O for 10 seconds... 
00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:30.243 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:30.503 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:30.503 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:30.503 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:30.503 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:30.503 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.503 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:30.503 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.764 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:30.764 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:30.764 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:30.764 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:30.764 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1517697 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1517697 ']' 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1517697 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:23:31.037 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:31.038 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1517697 00:23:31.038 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:31.038 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:31.038 11:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1517697' 00:23:31.038 killing process with pid 1517697 00:23:31.038 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1517697 00:23:31.038 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1517697 00:23:31.038 [2024-07-26 11:11:50.350667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4180 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.351549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae6300 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352505] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with t[2024-07-26 11:11:50.352517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nshe state(5) to be set 00:23:31.038 id:0 cdw10:00000000 cdw11:00000000 00:23:31.038 [2024-07-26 11:11:50.352540] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-26 
11:11:50.352554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.038 he state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.038 [2024-07-26 11:11:50.352570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.038 [2024-07-26 11:11:50.352577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.038 [2024-07-26 11:11:50.352584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.038 [2024-07-26 11:11:50.352591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-07-26 11:11:50.352599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with tid:0 cdw10:00000000 cdw11:00000000 00:23:31.038 he state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-26 11:11:50.352608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.038 he state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with t[2024-07-26 11:11:50.352618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2210c70 is same he state(5) to be set 00:23:31.038 with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352659] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352670] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352743] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 
00:23:31.038 [2024-07-26 11:11:50.352799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352810] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352852] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352857] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.352865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4640 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.354358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.354383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.038 [2024-07-26 11:11:50.354390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354410] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354428] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is 
same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354670] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354700] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354709] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354722] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354734] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.354758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4b00 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.355713] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.355736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.355743] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.355749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.355755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.355761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.355767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.355773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.355778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.355784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.355790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.355795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 
00:23:31.039 [2024-07-26 11:11:50.355801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.355808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.355814] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae4fe0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.356177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.356194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.356201] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.356211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.356217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.356223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.356229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.039 [2024-07-26 11:11:50.356235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is 
same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356323] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356373] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356539] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356551] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.356568] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae54c0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.357397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd1230 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358367] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358373] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 
00:23:31.040 [2024-07-26 11:11:50.358424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358438] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.040 [2024-07-26 11:11:50.358462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358468] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358503] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is 
same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358566] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358590] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358668] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.358673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd16f0 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359579] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359641] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359670] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359694] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 
00:23:31.041 [2024-07-26 11:11:50.359772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.041 [2024-07-26 11:11:50.359783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359810] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359833] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359844] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.359878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5980 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is 
same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360540] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360551] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360579] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360590] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360604] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360721] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360726] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.042 [2024-07-26 11:11:50.360756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.043 [2024-07-26 11:11:50.360761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.043 [2024-07-26 11:11:50.360767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.043 [2024-07-26 11:11:50.360772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.043 [2024-07-26 11:11:50.360778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.043 [2024-07-26 11:11:50.360784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.043 [2024-07-26 11:11:50.360790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.043 [2024-07-26 11:11:50.360795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.043 [2024-07-26 11:11:50.360802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.043 [2024-07-26 11:11:50.360808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.043 [2024-07-26 11:11:50.360814] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.043 [2024-07-26 11:11:50.360819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae5e40 is same with the state(5) to be set 00:23:31.043 [2024-07-26 11:11:50.366065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.043 [2024-07-26 11:11:50.366580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.043 [2024-07-26 11:11:50.366587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:31.044 [2024-07-26 11:11:50.366717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 
11:11:50.366862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.366987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.366993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.367001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.367007] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.367015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.367021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.367053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:31.044 [2024-07-26 11:11:50.367105] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x220cc50 was disconnected and freed. reset controller. 00:23:31.044 [2024-07-26 11:11:50.367313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.367326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.367338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.367345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.367353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.367360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.367369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.367375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.367383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.367389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.367398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.367404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.367412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.367418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.367427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.044 [2024-07-26 11:11:50.367434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.044 [2024-07-26 11:11:50.367444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.367991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.367997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.045 [2024-07-26 11:11:50.368005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.045 [2024-07-26 11:11:50.368012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.046 [2024-07-26 11:11:50.368251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368636] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22cbc30 was disconnected and freed. reset controller. 
00:23:31.046 [2024-07-26 11:11:50.368699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.368708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.368722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.368735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.368748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cc3e0 is same with the state(5) to be set 00:23:31.046 [2024-07-26 11:11:50.368781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.368789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.368802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.368817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.368831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.368836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c5ea0 is same with the state(5) to be set 00:23:31.046 [2024-07-26 11:11:50.368860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.368867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.378007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.378027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.378036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.378049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.378059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.378068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.378077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c6270 is same with the state(5) to be set 00:23:31.046 [2024-07-26 11:11:50.378106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.378118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.378127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.378136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.378145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.378154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.378163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.046 [2024-07-26 11:11:50.378172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.046 [2024-07-26 11:11:50.378180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22335e0 is same with the state(5) to be set 00:23:31.047 [2024-07-26 11:11:50.378209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:31.047 [2024-07-26 11:11:50.378261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba880 is same with the state(5) to be set 00:23:31.047 [2024-07-26 11:11:50.378318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dc910 is same with the state(5) to be set 00:23:31.047 [2024-07-26 11:11:50.378417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378482] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5f340 is same with the state(5) to be set 00:23:31.047 [2024-07-26 11:11:50.378513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2210c70 (9): Bad file descriptor 00:23:31.047 [2024-07-26 11:11:50.378547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c9e0 is same with the state(5) to be set 00:23:31.047 [2024-07-26 11:11:50.378649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.047 [2024-07-26 11:11:50.378714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba6a0 is same with the state(5) to be set 00:23:31.047 [2024-07-26 
11:11:50.378814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.378825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.378849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.378870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.378892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.378912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.378932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.378951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.378971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.378982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.378990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.379001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.379010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 
11:11:50.379021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.379030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.379041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.379057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.379068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.379077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.379088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.379097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.379108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.379117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.047 [2024-07-26 11:11:50.379128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.047 [2024-07-26 11:11:50.379136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 
11:11:50.379228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 
11:11:50.379429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 
11:11:50.379630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 
11:11:50.379829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.048 [2024-07-26 11:11:50.379910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.048 [2024-07-26 11:11:50.379920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.379931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.379940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.379951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.379960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.379971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.379980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.379991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.379999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.380010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.380019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.380030] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.380039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.380054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.380064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.380074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.380083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.380094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.380103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.380172] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2378460 was disconnected and freed. reset controller. 00:23:31.049 [2024-07-26 11:11:50.384056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:31.049 [2024-07-26 11:11:50.384104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ba6a0 (9): Bad file descriptor 00:23:31.049 [2024-07-26 11:11:50.384150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cc3e0 (9): Bad file descriptor 00:23:31.049 [2024-07-26 11:11:50.384166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c5ea0 (9): Bad file descriptor 00:23:31.049 [2024-07-26 11:11:50.384187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c6270 (9): Bad file descriptor 00:23:31.049 [2024-07-26 11:11:50.384202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22335e0 (9): Bad file descriptor 00:23:31.049 [2024-07-26 11:11:50.384221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ba880 (9): Bad file descriptor 00:23:31.049 [2024-07-26 11:11:50.384236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dc910 (9): Bad file descriptor 00:23:31.049 [2024-07-26 11:11:50.384253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5f340 (9): Bad file descriptor 00:23:31.049 [2024-07-26 11:11:50.384274] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:31.049 [2024-07-26 11:11:50.384288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223c9e0 (9): Bad file descriptor 00:23:31.049 [2024-07-26 11:11:50.385667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.385689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.385705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.385713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.385725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.385734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.385744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.385753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.385765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.385773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.385784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.385792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.385803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.385812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.385823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.385831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.385842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 11:11:50.385851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.049 [2024-07-26 11:11:50.385862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.049 [2024-07-26 
11:11:50.385874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[11:11:50.385885-11:11:50.386897: nvme_io_qpair_print_command / spdk_nvme_print_completion notices for WRITE sqid:1 cid:10-63 nsid:1 lba:25856-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:31.050 [2024-07-26 11:11:50.386966] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x220a2f0 was disconnected and freed. reset controller.
00:23:31.051 [2024-07-26 11:11:50.387230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:31.051 [2024-07-26 11:11:50.387251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:31.051 [2024-07-26 11:11:50.389699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.051 [2024-07-26 11:11:50.389728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ba6a0 with addr=10.0.0.2, port=4420
00:23:31.051 [2024-07-26 11:11:50.389738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba6a0 is same with the state(5) to be set
00:23:31.051 [2024-07-26 11:11:50.390235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.051 [2024-07-26 11:11:50.390248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c6270 with addr=10.0.0.2, port=4420
00:23:31.051 [2024-07-26 11:11:50.390256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c6270 is same with the state(5) to be set
00:23:31.051 [2024-07-26 11:11:50.390645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.051 [2024-07-26 11:11:50.390657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2210c70 with addr=10.0.0.2, port=4420
00:23:31.051 [2024-07-26 11:11:50.390664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2210c70 is same with the state(5) to be set
00:23:31.051 [2024-07-26 11:11:50.391095] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:31.051 [2024-07-26 11:11:50.391161] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:31.051 [2024-07-26 11:11:50.391211] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:31.051 [2024-07-26 11:11:50.391260] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:31.051 [2024-07-26 11:11:50.391282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:23:31.051 [2024-07-26 11:11:50.391307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ba6a0 (9): Bad file descriptor
00:23:31.051 [2024-07-26 11:11:50.391319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c6270 (9): Bad file descriptor
00:23:31.051 [2024-07-26 11:11:50.391333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2210c70 (9): Bad file descriptor
00:23:31.051 [2024-07-26 11:11:50.391385] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:31.051 [2024-07-26 11:11:50.391437] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:31.051 [2024-07-26 11:11:50.392010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.051 [2024-07-26 11:11:50.392026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22335e0 with addr=10.0.0.2, port=4420
00:23:31.051 [2024-07-26 11:11:50.392034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22335e0 is same with the state(5) to be set
00:23:31.051 [2024-07-26 11:11:50.392048] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:23:31.051 [2024-07-26 11:11:50.392056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:23:31.051 [2024-07-26 11:11:50.392064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:23:31.051 [2024-07-26 11:11:50.392079] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:23:31.051 [2024-07-26 11:11:50.392087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:23:31.051 [2024-07-26 11:11:50.392093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:23:31.051 [2024-07-26 11:11:50.392105] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:31.051 [2024-07-26 11:11:50.392113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:31.051 [2024-07-26 11:11:50.392119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:31.051 [2024-07-26 11:11:50.392200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:31.051 [2024-07-26 11:11:50.392210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:31.051 [2024-07-26 11:11:50.392216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:31.051 [2024-07-26 11:11:50.392225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22335e0 (9): Bad file descriptor
00:23:31.051 [2024-07-26 11:11:50.392263] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:23:31.051 [2024-07-26 11:11:50.392271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:23:31.051 [2024-07-26 11:11:50.392278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:23:31.051 [2024-07-26 11:11:50.392313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[11:11:50.394205-11:11:50.395324: nvme_io_qpair_print_command / spdk_nvme_print_completion notices for READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:31.053 [2024-07-26 11:11:50.395333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d20c0 is same with the state(5) to be set
[11:11:50.396516-11:11:50.397466: nvme_io_qpair_print_command / spdk_nvme_print_completion notices for READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:31.054 [2024-07-26 11:11:50.397473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d34c0 is same with the state(5) to be set
[11:11:50.398471-11:11:50.398602: nvme_io_qpair_print_command / spdk_nvme_print_completion notices for READ sqid:1 cid:0-8 nsid:1 lba:16384-17408 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:31.055 [2024-07-26 11:11:50.398610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.398986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.398992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.399000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.399006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.399014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.399025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.399033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.399039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.399054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.399061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.055 [2024-07-26 11:11:50.399069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.055 [2024-07-26 11:11:50.399075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:31.056 [2024-07-26 11:11:50.399205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 
11:11:50.399348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.399407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.399414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220b7a0 is same with the state(5) to be set 00:23:31.056 [2024-07-26 11:11:50.400423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.400434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.400444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.400450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.400458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.400465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.400473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.400480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.400488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.400494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.400502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.400508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.400516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.400523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.400530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.400537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.400545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.400551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.400559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.400565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.400573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.400582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.400590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.400596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.400604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.400611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.056 [2024-07-26 11:11:50.400618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.056 [2024-07-26 11:11:50.400624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.400986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.400994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.401000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.401008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.401014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.401022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.401028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.401036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.401229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.401239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.401247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.401255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.401261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.401269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.401276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.401284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.401290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.401298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.401304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.401314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.401320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.401328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.401334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.401342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.401349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.401356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.401363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.401371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.057 [2024-07-26 11:11:50.401378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.057 [2024-07-26 11:11:50.401386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.401393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.401401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.401407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.401415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.401422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.401429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.401436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.401444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.401450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.401458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.401465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.401472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.401479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.401487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.401494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.401502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.401509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.401517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.401523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.401530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.401537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.401544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2b3e610 is same with the state(5) to be set 00:23:31.058 [2024-07-26 11:11:50.402555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.402569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.402580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.402586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.402594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.402601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.402609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.402615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.402624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.402630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.402639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.402645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.402653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.402659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.402670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.402677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.058 [2024-07-26 11:11:50.402685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.058 [2024-07-26 11:11:50.402691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.324 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:31.324 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:31.324 [2024-07-26 11:11:50.747426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.324 [2024-07-26 11:11:50.747463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.324 [2024-07-26 
11:11:50.747479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.324 [2024-07-26 11:11:50.747491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.324 [2024-07-26 11:11:50.747503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.324 [2024-07-26 11:11:50.747514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.324 [2024-07-26 11:11:50.747526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.324 [2024-07-26 11:11:50.747536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.324 [2024-07-26 11:11:50.747547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.324 [2024-07-26 11:11:50.747557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.324 [2024-07-26 11:11:50.747570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.324 [2024-07-26 11:11:50.747580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.324 [2024-07-26 11:11:50.747591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.324 [2024-07-26 11:11:50.747601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.324 [2024-07-26 11:11:50.747612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747701] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.747982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.747996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.325 [2024-07-26 11:11:50.748438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.325 [2024-07-26 11:11:50.748451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.748460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.748472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.748481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.748493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.748503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.748515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.748524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.748537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.748547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.748561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.748571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.748583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.748593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.748605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.748616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.748628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.748638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.748650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.748659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.748671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.748681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.748692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2ce6050 is same with the state(5) to be set 00:23:31.326 [2024-07-26 11:11:50.750154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750291] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.326 [2024-07-26 11:11:50.750755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.326 [2024-07-26 11:11:50.750765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.750777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.750786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.750798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.750807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.750819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.750829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.750842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.750852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.750866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.750879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.750891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.750902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.750913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.750923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.750936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.750946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.750958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.750968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.750980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.750990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:31.327 [2024-07-26 11:11:50.751210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 
11:11:50.751430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.327 [2024-07-26 11:11:50.751597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.327 [2024-07-26 11:11:50.751608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ca740 is same with the state(5) to be set 00:23:31.327 [2024-07-26 11:11:50.753266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:31.328 [2024-07-26 11:11:50.753291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:31.328 [2024-07-26 11:11:50.753305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:31.328 [2024-07-26 11:11:50.753318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:31.328 
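The two dumps above are SPDK draining its I/O queue pairs during the controller resets: every in-flight READ reported by nvme_io_qpair_print_command is completed with the generic status ABORTED - SQ DELETION (00/08), the status NVMe uses for commands outstanding on a submission queue that is being deleted. A minimal shell sketch for tallying these aborts from a saved copy of this console output follows; build.log is an assumed capture path, not a file the test itself writes.
  # Count aborted completions (grouped by queue ID) and sanity-check that each
  # aborted READ notice has a matching completion. build.log is hypothetical.
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c
  grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: READ' build.log
  grep -c 'ABORTED - SQ DELETION' build.log
The two counts should agree, since each printed command is followed by exactly one printed completion in this dump.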
[2024-07-26 11:11:50.753419] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:31.328 [2024-07-26 11:11:50.753438] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:31.328 [2024-07-26 11:11:50.753543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:31.328 task offset: 22400 on job bdev=Nvme6n1 fails
00:23:31.328 
00:23:31.328 Latency(us)
00:23:31.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:31.328 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.328 Job: Nvme1n1 ended in about 0.91 seconds with error
00:23:31.328 Verification LBA range: start 0x0 length 0x400
00:23:31.328 Nvme1n1 : 0.91 210.87 13.18 70.29 0.00 225325.19 22225.25 235245.75
00:23:31.328 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.328 Job: Nvme2n1 ended in about 0.92 seconds with error
00:23:31.328 Verification LBA range: start 0x0 length 0x400
00:23:31.328 Nvme2n1 : 0.92 208.35 13.02 69.45 0.00 224089.04 32824.99 253481.85
00:23:31.328 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.328 Job: Nvme3n1 ended in about 0.92 seconds with error
00:23:31.328 Verification LBA range: start 0x0 length 0x400
00:23:31.328 Nvme3n1 : 0.92 138.58 8.66 69.29 0.00 294310.36 22795.13 260776.29
00:23:31.328 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.328 Job: Nvme4n1 ended in about 0.91 seconds with error
00:23:31.328 Verification LBA range: start 0x0 length 0x400
00:23:31.328 Nvme4n1 : 0.91 210.17 13.14 70.06 0.00 214193.64 6411.13 240716.58
00:23:31.328 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.328 Job: Nvme5n1 ended in about 0.93 seconds with error
00:23:31.328 Verification LBA range: start 0x0 length 0x400
00:23:31.328 Nvme5n1 : 0.93 138.29 8.64 69.15 0.00 284331.85 21769.35 255305.46
00:23:31.328 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.328 Job: Nvme6n1 ended in about 0.91 seconds with error
00:23:31.328 Verification LBA range: start 0x0 length 0x400
00:23:31.328 Nvme6n1 : 0.91 141.02 8.81 70.51 0.00 273058.95 15614.66 302719.33
00:23:31.328 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.328 Job: Nvme7n1 ended in about 0.93 seconds with error
00:23:31.328 Verification LBA range: start 0x0 length 0x400
00:23:31.328 Nvme7n1 : 0.93 206.96 12.94 68.99 0.00 205799.96 21883.33 220656.86
00:23:31.328 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.328 Job: Nvme8n1 ended in about 1.27 seconds with error
00:23:31.328 Verification LBA range: start 0x0 length 0x400
00:23:31.328 Nvme8n1 : 1.27 150.69 9.42 50.23 0.00 288461.69 35788.35 536141.47
00:23:31.328 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.328 Job: Nvme9n1 ended in about 1.28 seconds with error
00:23:31.328 Verification LBA range: start 0x0 length 0x400
00:23:31.328 Nvme9n1 : 1.28 150.35 9.40 50.12 0.00 285203.81 24732.72 496022.04
00:23:31.328 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:31.328 Job: Nvme10n1 ended in about 0.91 seconds with error
00:23:31.328 Verification LBA range: start 0x0 length 0x400
00:23:31.328 Nvme10n1 : 0.91 211.20 13.20 70.40 0.00 189224.29 22225.25 237069.36
00:23:31.328 ===================================================================================================================
00:23:31.328 Total : 1766.49 110.41 658.48 0.00 245521.46 6411.13 536141.47
00:23:31.328 [2024-07-26 11:11:50.777624] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:31.328 [2024-07-26 11:11:50.777654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:31.328 [2024-07-26 11:11:50.778231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.328 [2024-07-26 11:11:50.778250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23dc910 with addr=10.0.0.2, port=4420
00:23:31.328 [2024-07-26 11:11:50.778260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dc910 is same with the state(5) to be set
00:23:31.328 [2024-07-26 11:11:50.778716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.328 [2024-07-26 11:11:50.778728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223c9e0 with addr=10.0.0.2, port=4420
00:23:31.328 [2024-07-26 11:11:50.778735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223c9e0 is same with the state(5) to be set
00:23:31.328 [2024-07-26 11:11:50.779234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.328 [2024-07-26 11:11:50.779245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d5f340 with addr=10.0.0.2, port=4420
00:23:31.328 [2024-07-26 11:11:50.779257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5f340 is same with the state(5) to be set
00:23:31.328 [2024-07-26 11:11:50.779626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.328 [2024-07-26 11:11:50.779637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ba880 with addr=10.0.0.2, port=4420
00:23:31.328 [2024-07-26 11:11:50.779644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba880 is same with the state(5) to be set
00:23:31.328 [2024-07-26 11:11:50.780993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:31.328 [2024-07-26 11:11:50.781010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:31.328 [2024-07-26 11:11:50.781021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:23:31.328 [2024-07-26 11:11:50.781029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:23:31.328 [2024-07-26 11:11:50.781552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.328 [2024-07-26 11:11:50.781567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23cc3e0 with addr=10.0.0.2, port=4420
00:23:31.328 [2024-07-26 11:11:50.781575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cc3e0 is same with the state(5) to be set
00:23:31.328 [2024-07-26 11:11:50.781976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.328 [2024-07-26 11:11:50.781987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c5ea0 with addr=10.0.0.2, port=4420
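The Total row in the bdevperf summary above is simply the per-device sum of the IOPS and Fail/s columns; the quick awk check below (values copied from the table, so it is only an arithmetic sanity check) reproduces it to within the table's own per-row rounding.
  # Re-add the per-device IOPS and Fail/s columns from the table above.
  awk 'BEGIN {
    split("210.87 208.35 138.58 210.17 138.29 141.02 206.96 150.69 150.35 211.20", iops)
    split("70.29 69.45 69.29 70.06 69.15 70.51 68.99 50.23 50.12 70.40", fails)
    for (i = 1; i <= 10; i++) { s1 += iops[i]; s2 += fails[i] }
    printf "IOPS: %.2f  Fail/s: %.2f\n", s1, s2
  }'
  # Prints IOPS: 1766.48  Fail/s: 658.49, matching the reported Total of
  # 1766.49 and 658.48 once per-row rounding is taken into account.
Likewise the Total min (6411.13) and max (536141.47) latencies are just the smallest and largest per-device values, from Nvme4n1 and Nvme8n1 respectively.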
00:23:31.328 [2024-07-26 11:11:50.781994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c5ea0 is same with the state(5) to be set 00:23:31.328 [2024-07-26 11:11:50.782007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23dc910 (9): Bad file descriptor 00:23:31.328 [2024-07-26 11:11:50.782017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223c9e0 (9): Bad file descriptor 00:23:31.328 [2024-07-26 11:11:50.782027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5f340 (9): Bad file descriptor 00:23:31.328 [2024-07-26 11:11:50.782037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ba880 (9): Bad file descriptor 00:23:31.328 [2024-07-26 11:11:50.782085] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:31.328 [2024-07-26 11:11:50.782097] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:31.328 [2024-07-26 11:11:50.782108] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:31.328 [2024-07-26 11:11:50.782118] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:31.328 [2024-07-26 11:11:50.782663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.328 [2024-07-26 11:11:50.782676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2210c70 with addr=10.0.0.2, port=4420 00:23:31.328 [2024-07-26 11:11:50.782684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2210c70 is same with the state(5) to be set 00:23:31.328 [2024-07-26 11:11:50.783181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.328 [2024-07-26 11:11:50.783193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c6270 with addr=10.0.0.2, port=4420 00:23:31.328 [2024-07-26 11:11:50.783201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c6270 is same with the state(5) to be set 00:23:31.328 [2024-07-26 11:11:50.783647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.328 [2024-07-26 11:11:50.783661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ba6a0 with addr=10.0.0.2, port=4420 00:23:31.328 [2024-07-26 11:11:50.783668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ba6a0 is same with the state(5) to be set 00:23:31.328 [2024-07-26 11:11:50.784056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.328 [2024-07-26 11:11:50.784068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22335e0 with addr=10.0.0.2, port=4420 00:23:31.328 [2024-07-26 11:11:50.784075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22335e0 is same with the state(5) to be set 00:23:31.328 [2024-07-26 11:11:50.784084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cc3e0 (9): Bad file descriptor 00:23:31.328 [2024-07-26 11:11:50.784094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c5ea0 (9): Bad file descriptor 00:23:31.328 
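Two errno values recur in the reconnect attempts above: connect() failing with errno 111 (ECONNREFUSED on Linux, since nothing is listening on 10.0.0.2:4420 once the target application has stopped) and the subsequent flush attempts failing with (9), EBADF, because the socket behind the qpair has already been closed. A small helper for summarizing these failures, again assuming the console output was saved to a hypothetical build.log:
  # Group the refused reconnects by qpair pointer, address and port.
  grep -o 'sock connection error of tqpair=0x[0-9a-f]* with addr=[0-9.]*, port=[0-9]*' build.log | sort | uniq -c
  # And list the qpairs whose flush failed on an already-closed socket.
  grep -o 'Failed to flush tqpair=0x[0-9a-f]* (9): Bad file descriptor' build.log | sort -u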
[2024-07-26 11:11:50.784102] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:31.328 [2024-07-26 11:11:50.784109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:31.328 [2024-07-26 11:11:50.784117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:31.328 [2024-07-26 11:11:50.784129] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:31.328 [2024-07-26 11:11:50.784136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:31.328 [2024-07-26 11:11:50.784142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:31.329 [2024-07-26 11:11:50.784154] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:31.329 [2024-07-26 11:11:50.784161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:31.329 [2024-07-26 11:11:50.784168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:31.329 [2024-07-26 11:11:50.784178] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:31.329 [2024-07-26 11:11:50.784185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:31.329 [2024-07-26 11:11:50.784191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:31.329 [2024-07-26 11:11:50.784260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.329 [2024-07-26 11:11:50.784269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.329 [2024-07-26 11:11:50.784275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.329 [2024-07-26 11:11:50.784281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.329 [2024-07-26 11:11:50.784289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2210c70 (9): Bad file descriptor 00:23:31.329 [2024-07-26 11:11:50.784297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c6270 (9): Bad file descriptor 00:23:31.329 [2024-07-26 11:11:50.784305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ba6a0 (9): Bad file descriptor 00:23:31.329 [2024-07-26 11:11:50.784314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22335e0 (9): Bad file descriptor 00:23:31.329 [2024-07-26 11:11:50.784322] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:31.329 [2024-07-26 11:11:50.784329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:31.329 [2024-07-26 11:11:50.784337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:23:31.329 [2024-07-26 11:11:50.784348] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:31.329 [2024-07-26 11:11:50.784355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:31.329 [2024-07-26 11:11:50.784362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:31.329 [2024-07-26 11:11:50.784387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.329 [2024-07-26 11:11:50.784394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.329 [2024-07-26 11:11:50.784400] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:31.329 [2024-07-26 11:11:50.784407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:31.329 [2024-07-26 11:11:50.784414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:31.329 [2024-07-26 11:11:50.784422] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:31.329 [2024-07-26 11:11:50.784429] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:31.329 [2024-07-26 11:11:50.784436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:31.329 [2024-07-26 11:11:50.784445] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:31.329 [2024-07-26 11:11:50.784452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:31.329 [2024-07-26 11:11:50.784459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:31.329 [2024-07-26 11:11:50.784467] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:31.329 [2024-07-26 11:11:50.784473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:31.329 [2024-07-26 11:11:50.784480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:31.329 [2024-07-26 11:11:50.784505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.329 [2024-07-26 11:11:50.784513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.329 [2024-07-26 11:11:50.784519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:31.329 [2024-07-26 11:11:50.784525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
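The block above repeats the same failure sequence for each subsystem: the controller is disconnected for reset, reinitialization cannot complete because the reconnect was refused, nvme_ctrlr_fail marks it as failed, and bdev_nvme reports "Resetting controller failed." All ten subsystems (cnode1 through cnode10) end up in this state, which is consistent with a shutdown test that takes the target away underneath active I/O; the test case itself still completes below. A sketch for listing the final per-subsystem states from a saved log (hypothetical build.log again):
  # Which subsystems ended in the failed state, deduplicated.
  grep -o '\[nqn\.2016-06\.io\.spdk:cnode[0-9]*\] in failed state\.' build.log | sort -u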
00:23:32.272 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1517980 00:23:32.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1517980) - No such process 00:23:32.272 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:32.272 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:32.272 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:32.272 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:32.272 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:32.272 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:32.272 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:32.272 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:32.272 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:32.272 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:32.272 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:32.272 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:32.533 rmmod nvme_tcp 00:23:32.533 rmmod nvme_fabrics 00:23:32.533 rmmod nvme_keyring 00:23:32.533 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:32.533 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:32.533 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:32.533 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:32.533 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:32.533 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:32.533 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:32.533 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:32.533 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:32.533 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.533 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.533 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:23:34.443 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:34.443 00:23:34.443 real 0m7.792s 00:23:34.443 user 0m19.148s 00:23:34.443 sys 0m1.369s 00:23:34.443 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:34.443 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:34.443 ************************************ 00:23:34.443 END TEST nvmf_shutdown_tc3 00:23:34.443 ************************************ 00:23:34.443 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:34.443 00:23:34.443 real 0m31.185s 00:23:34.443 user 1m18.193s 00:23:34.443 sys 0m8.409s 00:23:34.443 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:34.443 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:34.443 ************************************ 00:23:34.443 END TEST nvmf_shutdown 00:23:34.443 ************************************ 00:23:34.702 11:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:23:34.702 00:23:34.702 real 10m39.945s 00:23:34.702 user 23m50.666s 00:23:34.702 sys 3m1.837s 00:23:34.702 11:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:34.702 11:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:34.702 ************************************ 00:23:34.702 END TEST nvmf_target_extra 00:23:34.702 ************************************ 00:23:34.702 11:11:53 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:34.702 11:11:53 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:34.702 11:11:53 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:34.702 11:11:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:34.702 ************************************ 00:23:34.702 START TEST nvmf_host 00:23:34.702 ************************************ 00:23:34.702 11:11:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:34.702 * Looking for test storage... 
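The teardown traced above (stoptarget / nvmftestfini in shutdown.sh and nvmf/common.sh) reduces to a handful of steps: kill the target process if it is still around, remove the per-job state and config files, unload the kernel NVMe/TCP modules, and flush the test addresses from the interface. A condensed sketch of that sequence follows; target_pid, testdir and the interface name cvl_0_1 are placeholders taken from this particular run, not values the scripts guarantee.
  # Condensed equivalent of the traced cleanup, under the assumptions above.
  kill -9 "$target_pid" 2>/dev/null || true   # here the target was already gone ("No such process")
  rm -f ./local-job0-0-verify.state
  rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
  sync
  modprobe -v -r nvme-tcp       # with -r this also drops nvme_fabrics and nvme_keyring, as logged above
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1      # remove the 10.0.0.x test addresses from the target-side interface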
00:23:34.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:34.702 11:11:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:34.702 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:34.702 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.702 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.702 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.702 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.702 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.702 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.702 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.702 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.702 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.702 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.702 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.703 ************************************ 00:23:34.703 START TEST nvmf_multicontroller 00:23:34.703 ************************************ 00:23:34.703 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:34.963 * Looking for test storage... 
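The nvmf/common.sh trace above only establishes defaults (ports 4420-4422, a generated host NQN/ID pair, and NVME_CONNECT='nvme connect'); nothing is connected at this point. For illustration only, a hedged sketch of how those variables would typically be combined for a kernel nvme-cli initiator; the suite that follows drives the SPDK userspace initiator through bdevperf instead, and the target address, port, and subsystem NQN below are the ones created later in this run rather than anything configured here:

    # NVME_CONNECT and NVME_HOST (--hostnqn/--hostid) come from nvmf/common.sh as traced above
    sudo $NVME_CONNECT "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    sudo nvme list-subsys                                   # confirm the path is visible
    sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # tear it back down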
00:23:34.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.963 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.964 11:11:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:34.964 11:11:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.278 11:11:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:40.278 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:40.278 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:40.278 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:40.279 Found net devices under 0000:86:00.0: cvl_0_0 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:40.279 Found net devices under 0000:86:00.1: cvl_0_1 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:40.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:23:40.279 00:23:40.279 --- 10.0.0.2 ping statistics --- 00:23:40.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.279 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:40.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:23:40.279 00:23:40.279 --- 10.0.0.1 ping statistics --- 00:23:40.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.279 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1522267 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1522267 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1522267 ']' 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:40.279 11:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.279 [2024-07-26 11:11:59.754348] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:40.279 [2024-07-26 11:11:59.754392] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.540 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.540 [2024-07-26 11:11:59.812986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:40.540 [2024-07-26 11:11:59.890647] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.540 [2024-07-26 11:11:59.890685] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.540 [2024-07-26 11:11:59.890692] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.540 [2024-07-26 11:11:59.890697] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.540 [2024-07-26 11:11:59.890702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.540 [2024-07-26 11:11:59.890803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.540 [2024-07-26 11:11:59.890888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.540 [2024-07-26 11:11:59.890890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.111 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.111 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:41.111 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.111 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:41.111 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.111 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.111 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.111 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.111 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.111 [2024-07-26 11:12:00.606037] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.371 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.371 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:41.371 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 Malloc0 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.372 
11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 [2024-07-26 11:12:00.669510] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 [2024-07-26 11:12:00.677450] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 Malloc1 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.372 11:12:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1522498 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1522498 /var/tmp/bdevperf.sock 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1522498 ']' 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
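Each rpc_cmd call above forwards its arguments to scripts/rpc.py on the target's RPC socket, so the subsystem layout bdevperf is about to consume can be reproduced with rpc.py directly. A minimal sketch of that sequence, assuming the default /var/tmp/spdk.sock of the nvmf_tgt started above and the same addresses as this run:

    rpc() { sudo ./scripts/rpc.py "$@"; }    # talks to /var/tmp/spdk.sock by default
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # the log repeats the same pattern for Malloc1 under nqn.2016-06.io.spdk:cnode2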
00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:41.372 11:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.312 NVMe0n1 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.312 1 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:42.312 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.313 request: 00:23:42.313 { 00:23:42.313 "name": "NVMe0", 00:23:42.313 "trtype": "tcp", 00:23:42.313 "traddr": "10.0.0.2", 00:23:42.313 "adrfam": "ipv4", 00:23:42.313 
"trsvcid": "4420", 00:23:42.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.313 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:42.313 "hostaddr": "10.0.0.2", 00:23:42.313 "hostsvcid": "60000", 00:23:42.313 "prchk_reftag": false, 00:23:42.313 "prchk_guard": false, 00:23:42.313 "hdgst": false, 00:23:42.313 "ddgst": false, 00:23:42.313 "method": "bdev_nvme_attach_controller", 00:23:42.313 "req_id": 1 00:23:42.313 } 00:23:42.313 Got JSON-RPC error response 00:23:42.313 response: 00:23:42.313 { 00:23:42.313 "code": -114, 00:23:42.313 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:42.313 } 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.313 request: 00:23:42.313 { 00:23:42.313 "name": "NVMe0", 00:23:42.313 "trtype": "tcp", 00:23:42.313 "traddr": "10.0.0.2", 00:23:42.313 "adrfam": "ipv4", 00:23:42.313 "trsvcid": "4420", 00:23:42.313 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:42.313 "hostaddr": "10.0.0.2", 00:23:42.313 "hostsvcid": "60000", 00:23:42.313 "prchk_reftag": false, 00:23:42.313 "prchk_guard": false, 00:23:42.313 "hdgst": false, 00:23:42.313 "ddgst": false, 00:23:42.313 "method": "bdev_nvme_attach_controller", 00:23:42.313 "req_id": 1 00:23:42.313 } 00:23:42.313 Got JSON-RPC error response 00:23:42.313 response: 00:23:42.313 { 00:23:42.313 "code": -114, 00:23:42.313 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:23:42.313 } 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.313 request: 00:23:42.313 { 00:23:42.313 "name": "NVMe0", 00:23:42.313 "trtype": "tcp", 00:23:42.313 "traddr": "10.0.0.2", 00:23:42.313 "adrfam": "ipv4", 00:23:42.313 "trsvcid": "4420", 00:23:42.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.313 "hostaddr": "10.0.0.2", 00:23:42.313 "hostsvcid": "60000", 00:23:42.313 "prchk_reftag": false, 00:23:42.313 "prchk_guard": false, 00:23:42.313 "hdgst": false, 00:23:42.313 "ddgst": false, 00:23:42.313 "multipath": "disable", 00:23:42.313 "method": "bdev_nvme_attach_controller", 00:23:42.313 "req_id": 1 00:23:42.313 } 00:23:42.313 Got JSON-RPC error response 00:23:42.313 response: 00:23:42.313 { 00:23:42.313 "code": -114, 00:23:42.313 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:42.313 } 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.313 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:42.574 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.574 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:42.574 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.574 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.574 request: 00:23:42.574 { 00:23:42.574 "name": "NVMe0", 00:23:42.574 "trtype": "tcp", 00:23:42.574 "traddr": "10.0.0.2", 00:23:42.574 "adrfam": "ipv4", 00:23:42.574 "trsvcid": "4420", 00:23:42.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.574 "hostaddr": "10.0.0.2", 00:23:42.574 "hostsvcid": "60000", 00:23:42.574 "prchk_reftag": false, 00:23:42.574 "prchk_guard": false, 00:23:42.574 "hdgst": false, 00:23:42.574 "ddgst": false, 00:23:42.574 "multipath": "failover", 00:23:42.574 "method": "bdev_nvme_attach_controller", 00:23:42.574 "req_id": 1 00:23:42.574 } 00:23:42.574 Got JSON-RPC error response 00:23:42.574 response: 00:23:42.574 { 00:23:42.574 "code": -114, 00:23:42.574 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:42.574 } 00:23:42.574 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:42.574 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:42.574 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:42.574 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:42.574 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:42.574 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:42.574 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.574 11:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.574 00:23:42.574 11:12:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.574 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:42.574 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.574 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.574 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.835 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:42.835 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.835 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.835 00:23:42.835 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.835 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.835 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:42.835 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.835 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.835 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.835 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:42.835 11:12:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:44.217 0 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1522498 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1522498 ']' 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1522498 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1522498 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
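The -z flag bdevperf was started with above keeps it idle on /var/tmp/bdevperf.sock until an RPC tells it what to do; the perform_tests call from bdevperf.py is what finally launches the queued 1-second, queue-depth-128, 4096-byte write job whose results follow. A condensed sketch of that initiator-side flow, using the binaries and arguments shown in this log and omitting the negative multipath-attach cases exercised above:

    sudo ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    brpc() { sudo ./scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }
    brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    brpc bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    brpc bdev_nvme_get_controllers | grep -c NVMe            # expect 2, as checked above
    sudo ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    brpc bdev_nvme_detach_controller NVMe1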
00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1522498' 00:23:44.217 killing process with pid 1522498 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1522498 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1522498 00:23:44.217 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:44.218 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:44.218 [2024-07-26 11:12:00.784290] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:44.218 [2024-07-26 11:12:00.784347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522498 ] 00:23:44.218 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.218 [2024-07-26 11:12:00.839757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.218 [2024-07-26 11:12:00.922590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.218 [2024-07-26 11:12:02.260745] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 53603b1c-06db-4c34-bd3d-f082ef0eb852 already exists 00:23:44.218 [2024-07-26 11:12:02.260774] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:53603b1c-06db-4c34-bd3d-f082ef0eb852 alias for bdev NVMe1n1 00:23:44.218 [2024-07-26 11:12:02.260783] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:44.218 Running I/O for 1 seconds... 00:23:44.218 00:23:44.218 Latency(us) 00:23:44.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.218 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:44.218 NVMe0n1 : 1.01 22632.50 88.41 0.00 0.00 5637.58 2550.21 25188.62 00:23:44.218 =================================================================================================================== 00:23:44.218 Total : 22632.50 88.41 0.00 0.00 5637.58 2550.21 25188.62 00:23:44.218 Received shutdown signal, test time was about 1.000000 seconds 00:23:44.218 00:23:44.218 Latency(us) 00:23:44.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.218 =================================================================================================================== 00:23:44.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:44.218 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:44.218 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:44.218 rmmod nvme_tcp 00:23:44.218 rmmod nvme_fabrics 00:23:44.218 rmmod nvme_keyring 00:23:44.479 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:44.479 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:44.479 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:44.479 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1522267 ']' 00:23:44.479 11:12:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1522267 00:23:44.479 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1522267 ']' 00:23:44.479 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1522267 00:23:44.479 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:44.479 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:44.479 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1522267 00:23:44.479 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:44.479 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:44.479 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1522267' 00:23:44.479 killing process with pid 1522267 00:23:44.479 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1522267 00:23:44.479 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1522267 00:23:44.739 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:44.739 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:44.739 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:44.739 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:44.739 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:44.739 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.739 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.739 11:12:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.692 11:12:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:46.692 00:23:46.692 real 0m11.927s 00:23:46.692 user 0m17.005s 00:23:46.692 sys 0m4.837s 00:23:46.692 11:12:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:46.692 11:12:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:46.692 ************************************ 00:23:46.692 END TEST nvmf_multicontroller 00:23:46.692 ************************************ 00:23:46.692 11:12:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:46.692 11:12:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:46.692 11:12:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:46.692 11:12:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.692 ************************************ 00:23:46.692 START TEST nvmf_aer 00:23:46.692 ************************************ 00:23:46.692 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:46.953 * Looking for test storage... 00:23:46.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.953 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:46.954 11:12:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.238 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.238 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:52.238 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:52.238 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:52.238 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:52.238 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:52.238 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:52.238 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:52.238 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:52.238 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:52.238 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:52.239 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:52.239 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:52.239 Found net devices under 0000:86:00.0: cvl_0_0 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.239 11:12:11 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:52.239 Found net devices under 0000:86:00.1: cvl_0_1 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.239 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:52.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:23:52.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:23:52.500 00:23:52.500 --- 10.0.0.2 ping statistics --- 00:23:52.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.500 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:23:52.500 00:23:52.500 --- 10.0.0.1 ping statistics --- 00:23:52.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.500 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1526829 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1526829 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1526829 ']' 00:23:52.500 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.501 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.501 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.501 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.501 11:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.501 [2024-07-26 11:12:11.835786] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
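The pings and the nvmf_tgt start-up above come out of nvmftestinit/nvmfappstart: one port of the E810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as 10.0.0.1, and the target is then launched inside that namespace with core mask 0xF. A condensed sketch of that plumbing follows, reusing the interface names and addresses recorded here and assuming the spdk checkout as the working directory.

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Reachability checks, matching the two ping transcripts above.
  ping -c 1 10.0.0.2
  ip netns exec $NS ping -c 1 10.0.0.1

  # The target runs entirely inside the namespace; -m 0xF gives it the four reactors logged below.
  ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &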
00:23:52.501 [2024-07-26 11:12:11.835833] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.501 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.501 [2024-07-26 11:12:11.896246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.501 [2024-07-26 11:12:11.977029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.501 [2024-07-26 11:12:11.977069] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.501 [2024-07-26 11:12:11.977077] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.501 [2024-07-26 11:12:11.977082] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.501 [2024-07-26 11:12:11.977087] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.501 [2024-07-26 11:12:11.977138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.501 [2024-07-26 11:12:11.977237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.501 [2024-07-26 11:12:11.977299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.501 [2024-07-26 11:12:11.977301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.441 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.441 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:53.441 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.441 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.441 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.441 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.441 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.441 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.442 [2024-07-26 11:12:12.679439] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.442 Malloc0 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.442 11:12:12 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.442 [2024-07-26 11:12:12.731149] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.442 [ 00:23:53.442 { 00:23:53.442 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:53.442 "subtype": "Discovery", 00:23:53.442 "listen_addresses": [], 00:23:53.442 "allow_any_host": true, 00:23:53.442 "hosts": [] 00:23:53.442 }, 00:23:53.442 { 00:23:53.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.442 "subtype": "NVMe", 00:23:53.442 "listen_addresses": [ 00:23:53.442 { 00:23:53.442 "trtype": "TCP", 00:23:53.442 "adrfam": "IPv4", 00:23:53.442 "traddr": "10.0.0.2", 00:23:53.442 "trsvcid": "4420" 00:23:53.442 } 00:23:53.442 ], 00:23:53.442 "allow_any_host": true, 00:23:53.442 "hosts": [], 00:23:53.442 "serial_number": "SPDK00000000000001", 00:23:53.442 "model_number": "SPDK bdev Controller", 00:23:53.442 "max_namespaces": 2, 00:23:53.442 "min_cntlid": 1, 00:23:53.442 "max_cntlid": 65519, 00:23:53.442 "namespaces": [ 00:23:53.442 { 00:23:53.442 "nsid": 1, 00:23:53.442 "bdev_name": "Malloc0", 00:23:53.442 "name": "Malloc0", 00:23:53.442 "nguid": "7382DB29428E4B42A356F17500CF76D2", 00:23:53.442 "uuid": "7382db29-428e-4b42-a356-f17500cf76d2" 00:23:53.442 } 00:23:53.442 ] 00:23:53.442 } 00:23:53.442 ] 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1527059 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:53.442 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:53.442 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:53.702 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:53.702 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:23:53.702 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:23:53.702 11:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.702 Malloc1 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.702 [ 00:23:53.702 { 00:23:53.702 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:53.702 "subtype": "Discovery", 00:23:53.702 "listen_addresses": [], 00:23:53.702 "allow_any_host": true, 00:23:53.702 "hosts": [] 00:23:53.702 }, 00:23:53.702 { 00:23:53.702 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.702 "subtype": "NVMe", 00:23:53.702 "listen_addresses": [ 00:23:53.702 { 00:23:53.702 "trtype": "TCP", 00:23:53.702 "adrfam": "IPv4", 00:23:53.702 "traddr": "10.0.0.2", 00:23:53.702 "trsvcid": "4420" 00:23:53.702 } 00:23:53.702 ], 00:23:53.702 "allow_any_host": true, 00:23:53.702 "hosts": [], 00:23:53.702 "serial_number": "SPDK00000000000001", 00:23:53.702 "model_number": "SPDK bdev Controller", 00:23:53.702 "max_namespaces": 2, 00:23:53.702 "min_cntlid": 1, 00:23:53.702 
"max_cntlid": 65519, 00:23:53.702 "namespaces": [ 00:23:53.702 { 00:23:53.702 "nsid": 1, 00:23:53.702 "bdev_name": "Malloc0", 00:23:53.702 "name": "Malloc0", 00:23:53.702 "nguid": "7382DB29428E4B42A356F17500CF76D2", 00:23:53.702 "uuid": "7382db29-428e-4b42-a356-f17500cf76d2" 00:23:53.702 }, 00:23:53.702 { 00:23:53.702 "nsid": 2, 00:23:53.702 "bdev_name": "Malloc1", 00:23:53.702 "name": "Malloc1", 00:23:53.702 "nguid": "B2BC59A187884BBEAB514C5BF8DA9098", 00:23:53.702 "uuid": "b2bc59a1-8788-4bbe-ab51-4c5bf8da9098" 00:23:53.702 } 00:23:53.702 ] 00:23:53.702 } 00:23:53.702 ] 00:23:53.702 Asynchronous Event Request test 00:23:53.702 Attaching to 10.0.0.2 00:23:53.702 Attached to 10.0.0.2 00:23:53.702 Registering asynchronous event callbacks... 00:23:53.702 Starting namespace attribute notice tests for all controllers... 00:23:53.702 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:53.702 aer_cb - Changed Namespace 00:23:53.702 Cleaning up... 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1527059 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.702 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.703 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.703 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:53.703 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.703 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:53.703 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.703 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:53.703 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:53.963 rmmod nvme_tcp 00:23:53.963 rmmod nvme_fabrics 00:23:53.963 rmmod nvme_keyring 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:53.963 11:12:13 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1526829 ']' 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1526829 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1526829 ']' 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1526829 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1526829 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1526829' 00:23:53.963 killing process with pid 1526829 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1526829 00:23:53.963 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1526829 00:23:54.223 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:54.223 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:54.223 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:54.223 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:54.223 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:54.223 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.223 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.223 11:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.134 11:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:56.134 00:23:56.134 real 0m9.415s 00:23:56.134 user 0m7.520s 00:23:56.134 sys 0m4.660s 00:23:56.134 11:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:56.134 11:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.134 ************************************ 00:23:56.134 END TEST nvmf_aer 00:23:56.134 ************************************ 00:23:56.134 11:12:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:56.134 11:12:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:56.134 11:12:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:56.134 11:12:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.134 ************************************ 00:23:56.134 START TEST nvmf_async_init 00:23:56.134 ************************************ 00:23:56.134 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:56.393 * Looking for test storage... 
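Stripped of the xtrace noise, the nvmf_aer run that wrapped up above (before the nvmf_async_init preamble now starting) is a short RPC conversation plus one host tool: the target gets a TCP transport, a 512-byte-block malloc bdev, and a subsystem capped at two namespaces; test/nvme/aer/aer connects, registers asynchronous event callbacks, and touches /tmp/aer_touch_file so the harness knows it is ready; hot-adding a second namespace then triggers the "Changed Namespace" notice seen in the output. A hedged recap of those steps, with every command taken from the log and rpc.py standing in for rpc_cmd against the target's default socket:

  RPC="scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 --name Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: wait for up to two AENs, touch the file once callbacks are registered.
  ./test/nvme/aer/aer -n 2 -t /tmp/aer_touch_file \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &

  # Hot-adding nsid 2 is what fires the namespace-attribute-changed AEN.
  $RPC bdev_malloc_create 64 4096 --name Malloc1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2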
00:23:56.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.393 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:56.394 11:12:15 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=92554db818184c46b43f1014bcf043b6 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:56.394 11:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:01.739 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.739 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:01.740 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
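The discovery pass above matches the two E810 ports (device ID 0x159b) against the Intel/Mellanox ID tables; the pass that follows maps each PCI address to its kernel net device by globbing sysfs, as echoed below. A minimal standalone sketch of that mapping, reusing the BDFs reported in this run (the loop body is illustrative, not the harness code):

# Sketch: map NVMe-oF-capable PCI NICs to their kernel net device names,
# mirroring the /sys/bus/pci/devices/<bdf>/net/* glob used by nvmf/common.sh.
for pci in 0000:86:00.0 0000:86:00.1; do            # BDFs reported in this run
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue                # skip ports with no bound net device
        echo "Found net device under $pci: ${netdir##*/}"
    done
done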
00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:01.740 Found net devices under 0000:86:00.0: cvl_0_0 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:01.740 Found net devices under 0000:86:00.1: cvl_0_1 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:01.740 11:12:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:24:01.740 00:24:01.740 --- 10.0.0.2 ping statistics --- 00:24:01.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.740 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:24:01.740 00:24:01.740 --- 10.0.0.1 ping statistics --- 00:24:01.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.740 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1530573 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1530573 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1530573 ']' 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:01.740 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:01.740 [2024-07-26 11:12:21.159846] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
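The nvmf_tcp_init step above wires the target port into a private network namespace and verifies reachability in both directions before the target application is started. A condensed sketch of that plumbing, reusing the interface names (cvl_0_0/cvl_0_1), addresses, and port from this run:

# Sketch: target-side netns setup performed by nvmf_tcp_init.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # target runs inside this namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator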
00:24:01.740 [2024-07-26 11:12:21.159890] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.740 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.740 [2024-07-26 11:12:21.215832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.000 [2024-07-26 11:12:21.298912] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.000 [2024-07-26 11:12:21.298947] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.000 [2024-07-26 11:12:21.298954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.000 [2024-07-26 11:12:21.298960] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.000 [2024-07-26 11:12:21.298968] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.000 [2024-07-26 11:12:21.298985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.570 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:02.570 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:24:02.570 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:02.570 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:02.570 11:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.570 [2024-07-26 11:12:22.006899] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.570 null0 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:02.570 11:12:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 92554db818184c46b43f1014bcf043b6 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.570 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.571 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:02.571 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.571 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.571 [2024-07-26 11:12:22.047131] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.571 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.571 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:02.571 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.571 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.831 nvme0n1 00:24:02.831 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.831 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:02.831 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.831 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.831 [ 00:24:02.831 { 00:24:02.831 "name": "nvme0n1", 00:24:02.831 "aliases": [ 00:24:02.831 "92554db8-1818-4c46-b43f-1014bcf043b6" 00:24:02.831 ], 00:24:02.831 "product_name": "NVMe disk", 00:24:02.831 "block_size": 512, 00:24:02.831 "num_blocks": 2097152, 00:24:02.831 "uuid": "92554db8-1818-4c46-b43f-1014bcf043b6", 00:24:02.831 "assigned_rate_limits": { 00:24:02.831 "rw_ios_per_sec": 0, 00:24:02.831 "rw_mbytes_per_sec": 0, 00:24:02.831 "r_mbytes_per_sec": 0, 00:24:02.831 "w_mbytes_per_sec": 0 00:24:02.831 }, 00:24:02.831 "claimed": false, 00:24:02.831 "zoned": false, 00:24:02.831 "supported_io_types": { 00:24:02.831 "read": true, 00:24:02.831 "write": true, 00:24:02.831 "unmap": false, 00:24:02.831 "flush": true, 00:24:02.831 "reset": true, 00:24:02.831 "nvme_admin": true, 00:24:02.831 "nvme_io": true, 00:24:02.831 "nvme_io_md": false, 00:24:02.831 "write_zeroes": true, 00:24:02.831 "zcopy": false, 00:24:02.831 "get_zone_info": false, 00:24:02.831 "zone_management": false, 00:24:02.831 "zone_append": false, 00:24:02.831 "compare": true, 00:24:02.831 "compare_and_write": true, 00:24:02.831 "abort": true, 00:24:02.831 "seek_hole": false, 00:24:02.831 "seek_data": false, 00:24:02.831 "copy": true, 00:24:02.831 "nvme_iov_md": 
false 00:24:02.832 }, 00:24:02.832 "memory_domains": [ 00:24:02.832 { 00:24:02.832 "dma_device_id": "system", 00:24:02.832 "dma_device_type": 1 00:24:02.832 } 00:24:02.832 ], 00:24:02.832 "driver_specific": { 00:24:02.832 "nvme": [ 00:24:02.832 { 00:24:02.832 "trid": { 00:24:02.832 "trtype": "TCP", 00:24:02.832 "adrfam": "IPv4", 00:24:02.832 "traddr": "10.0.0.2", 00:24:02.832 "trsvcid": "4420", 00:24:02.832 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:02.832 }, 00:24:02.832 "ctrlr_data": { 00:24:02.832 "cntlid": 1, 00:24:02.832 "vendor_id": "0x8086", 00:24:02.832 "model_number": "SPDK bdev Controller", 00:24:02.832 "serial_number": "00000000000000000000", 00:24:02.832 "firmware_revision": "24.09", 00:24:02.832 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:02.832 "oacs": { 00:24:02.832 "security": 0, 00:24:02.832 "format": 0, 00:24:02.832 "firmware": 0, 00:24:02.832 "ns_manage": 0 00:24:02.832 }, 00:24:02.832 "multi_ctrlr": true, 00:24:02.832 "ana_reporting": false 00:24:02.832 }, 00:24:02.832 "vs": { 00:24:02.832 "nvme_version": "1.3" 00:24:02.832 }, 00:24:02.832 "ns_data": { 00:24:02.832 "id": 1, 00:24:02.832 "can_share": true 00:24:02.832 } 00:24:02.832 } 00:24:02.832 ], 00:24:02.832 "mp_policy": "active_passive" 00:24:02.832 } 00:24:02.832 } 00:24:02.832 ] 00:24:02.832 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.832 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:02.832 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.832 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.832 [2024-07-26 11:12:22.295629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:02.832 [2024-07-26 11:12:22.295687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215b390 (9): Bad file descriptor 00:24:03.092 [2024-07-26 11:12:22.427141] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
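The controller reset just logged is the tail end of the async_init flow; the whole sequence reduces to a handful of RPCs against the target running in the namespace. A hedged sketch using scripts/rpc.py (rpc_cmd in the harness wraps it against the default /var/tmp/spdk.sock socket; the NGUID and sizes are the values visible above):

# Sketch: RPC sequence exercised by host/async_init.sh.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o                        # TCP transport
$rpc bdev_null_create null0 1024 512                        # 1024 MiB null bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a    # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 92554db818184c46b43f1014bcf043b6
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$rpc bdev_nvme_reset_controller nvme0                       # the reset reported successful above
$rpc bdev_get_bdevs -b nvme0n1                              # cntlid goes from 1 to 2 after the reset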
00:24:03.092 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.092 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:03.092 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.092 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.092 [ 00:24:03.092 { 00:24:03.092 "name": "nvme0n1", 00:24:03.092 "aliases": [ 00:24:03.092 "92554db8-1818-4c46-b43f-1014bcf043b6" 00:24:03.092 ], 00:24:03.092 "product_name": "NVMe disk", 00:24:03.092 "block_size": 512, 00:24:03.092 "num_blocks": 2097152, 00:24:03.092 "uuid": "92554db8-1818-4c46-b43f-1014bcf043b6", 00:24:03.092 "assigned_rate_limits": { 00:24:03.092 "rw_ios_per_sec": 0, 00:24:03.092 "rw_mbytes_per_sec": 0, 00:24:03.092 "r_mbytes_per_sec": 0, 00:24:03.092 "w_mbytes_per_sec": 0 00:24:03.092 }, 00:24:03.092 "claimed": false, 00:24:03.092 "zoned": false, 00:24:03.092 "supported_io_types": { 00:24:03.092 "read": true, 00:24:03.092 "write": true, 00:24:03.092 "unmap": false, 00:24:03.092 "flush": true, 00:24:03.092 "reset": true, 00:24:03.092 "nvme_admin": true, 00:24:03.092 "nvme_io": true, 00:24:03.092 "nvme_io_md": false, 00:24:03.092 "write_zeroes": true, 00:24:03.092 "zcopy": false, 00:24:03.092 "get_zone_info": false, 00:24:03.092 "zone_management": false, 00:24:03.092 "zone_append": false, 00:24:03.092 "compare": true, 00:24:03.092 "compare_and_write": true, 00:24:03.092 "abort": true, 00:24:03.092 "seek_hole": false, 00:24:03.092 "seek_data": false, 00:24:03.092 "copy": true, 00:24:03.092 "nvme_iov_md": false 00:24:03.092 }, 00:24:03.092 "memory_domains": [ 00:24:03.092 { 00:24:03.092 "dma_device_id": "system", 00:24:03.092 "dma_device_type": 1 00:24:03.092 } 00:24:03.092 ], 00:24:03.092 "driver_specific": { 00:24:03.092 "nvme": [ 00:24:03.092 { 00:24:03.092 "trid": { 00:24:03.092 "trtype": "TCP", 00:24:03.092 "adrfam": "IPv4", 00:24:03.092 "traddr": "10.0.0.2", 00:24:03.092 "trsvcid": "4420", 00:24:03.092 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:03.092 }, 00:24:03.092 "ctrlr_data": { 00:24:03.093 "cntlid": 2, 00:24:03.093 "vendor_id": "0x8086", 00:24:03.093 "model_number": "SPDK bdev Controller", 00:24:03.093 "serial_number": "00000000000000000000", 00:24:03.093 "firmware_revision": "24.09", 00:24:03.093 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:03.093 "oacs": { 00:24:03.093 "security": 0, 00:24:03.093 "format": 0, 00:24:03.093 "firmware": 0, 00:24:03.093 "ns_manage": 0 00:24:03.093 }, 00:24:03.093 "multi_ctrlr": true, 00:24:03.093 "ana_reporting": false 00:24:03.093 }, 00:24:03.093 "vs": { 00:24:03.093 "nvme_version": "1.3" 00:24:03.093 }, 00:24:03.093 "ns_data": { 00:24:03.093 "id": 1, 00:24:03.093 "can_share": true 00:24:03.093 } 00:24:03.093 } 00:24:03.093 ], 00:24:03.093 "mp_policy": "active_passive" 00:24:03.093 } 00:24:03.093 } 00:24:03.093 ] 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.093 11:12:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.sALt8LbNZI 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.sALt8LbNZI 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.093 [2024-07-26 11:12:22.476175] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:03.093 [2024-07-26 11:12:22.476281] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sALt8LbNZI 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.093 [2024-07-26 11:12:22.484190] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sALt8LbNZI 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.093 [2024-07-26 11:12:22.492235] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:03.093 [2024-07-26 11:12:22.492270] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:03.093 nvme0n1 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
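The secure-channel leg above repeats the attach against a PSK-protected listener on port 4421; the bdev listing that follows confirms the controller came up on cntlid 3. A sketch of those steps (the interchange-format key is the one echoed by the test, and the temp-file name is whatever mktemp returns in a given run):

# Sketch: experimental TLS/PSK attach exercised by host/async_init.sh.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key_path=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"                                      # PSK file must be private
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
rm -f "$key_path"                                           # removed again at the end of the test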
00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.093 [ 00:24:03.093 { 00:24:03.093 "name": "nvme0n1", 00:24:03.093 "aliases": [ 00:24:03.093 "92554db8-1818-4c46-b43f-1014bcf043b6" 00:24:03.093 ], 00:24:03.093 "product_name": "NVMe disk", 00:24:03.093 "block_size": 512, 00:24:03.093 "num_blocks": 2097152, 00:24:03.093 "uuid": "92554db8-1818-4c46-b43f-1014bcf043b6", 00:24:03.093 "assigned_rate_limits": { 00:24:03.093 "rw_ios_per_sec": 0, 00:24:03.093 "rw_mbytes_per_sec": 0, 00:24:03.093 "r_mbytes_per_sec": 0, 00:24:03.093 "w_mbytes_per_sec": 0 00:24:03.093 }, 00:24:03.093 "claimed": false, 00:24:03.093 "zoned": false, 00:24:03.093 "supported_io_types": { 00:24:03.093 "read": true, 00:24:03.093 "write": true, 00:24:03.093 "unmap": false, 00:24:03.093 "flush": true, 00:24:03.093 "reset": true, 00:24:03.093 "nvme_admin": true, 00:24:03.093 "nvme_io": true, 00:24:03.093 "nvme_io_md": false, 00:24:03.093 "write_zeroes": true, 00:24:03.093 "zcopy": false, 00:24:03.093 "get_zone_info": false, 00:24:03.093 "zone_management": false, 00:24:03.093 "zone_append": false, 00:24:03.093 "compare": true, 00:24:03.093 "compare_and_write": true, 00:24:03.093 "abort": true, 00:24:03.093 "seek_hole": false, 00:24:03.093 "seek_data": false, 00:24:03.093 "copy": true, 00:24:03.093 "nvme_iov_md": false 00:24:03.093 }, 00:24:03.093 "memory_domains": [ 00:24:03.093 { 00:24:03.093 "dma_device_id": "system", 00:24:03.093 "dma_device_type": 1 00:24:03.093 } 00:24:03.093 ], 00:24:03.093 "driver_specific": { 00:24:03.093 "nvme": [ 00:24:03.093 { 00:24:03.093 "trid": { 00:24:03.093 "trtype": "TCP", 00:24:03.093 "adrfam": "IPv4", 00:24:03.093 "traddr": "10.0.0.2", 00:24:03.093 "trsvcid": "4421", 00:24:03.093 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:03.093 }, 00:24:03.093 "ctrlr_data": { 00:24:03.093 "cntlid": 3, 00:24:03.093 "vendor_id": "0x8086", 00:24:03.093 "model_number": "SPDK bdev Controller", 00:24:03.093 "serial_number": "00000000000000000000", 00:24:03.093 "firmware_revision": "24.09", 00:24:03.093 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:03.093 "oacs": { 00:24:03.093 "security": 0, 00:24:03.093 "format": 0, 00:24:03.093 "firmware": 0, 00:24:03.093 "ns_manage": 0 00:24:03.093 }, 00:24:03.093 "multi_ctrlr": true, 00:24:03.093 "ana_reporting": false 00:24:03.093 }, 00:24:03.093 "vs": { 00:24:03.093 "nvme_version": "1.3" 00:24:03.093 }, 00:24:03.093 "ns_data": { 00:24:03.093 "id": 1, 00:24:03.093 "can_share": true 00:24:03.093 } 00:24:03.093 } 00:24:03.093 ], 00:24:03.093 "mp_policy": "active_passive" 00:24:03.093 } 00:24:03.093 } 00:24:03.093 ] 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.093 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.sALt8LbNZI 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:03.354 11:12:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:03.354 rmmod nvme_tcp 00:24:03.354 rmmod nvme_fabrics 00:24:03.354 rmmod nvme_keyring 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1530573 ']' 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1530573 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1530573 ']' 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1530573 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1530573 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1530573' 00:24:03.354 killing process with pid 1530573 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1530573 00:24:03.354 [2024-07-26 11:12:22.673391] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:03.354 [2024-07-26 11:12:22.673414] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1530573 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.354 11:12:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.354 11:12:22 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.912 11:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:05.912 00:24:05.912 real 0m9.280s 00:24:05.912 user 0m3.441s 00:24:05.912 sys 0m4.323s 00:24:05.912 11:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:05.912 11:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:05.912 ************************************ 00:24:05.912 END TEST nvmf_async_init 00:24:05.912 ************************************ 00:24:05.912 11:12:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:05.912 11:12:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:05.912 11:12:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:05.912 11:12:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.912 ************************************ 00:24:05.912 START TEST dma 00:24:05.912 ************************************ 00:24:05.912 11:12:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:05.912 * Looking for test storage... 00:24:05.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.912 
11:12:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.912 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.912 11:12:25 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:05.913 00:24:05.913 real 0m0.116s 00:24:05.913 user 0m0.060s 00:24:05.913 sys 0m0.062s 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:05.913 ************************************ 00:24:05.913 END TEST dma 00:24:05.913 ************************************ 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.913 ************************************ 00:24:05.913 START TEST nvmf_identify 00:24:05.913 ************************************ 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:05.913 * Looking for test storage... 00:24:05.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:05.913 11:12:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:11.201 11:12:30 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.201 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:11.202 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:11.202 11:12:30 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:11.202 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:11.202 Found net devices under 0000:86:00.0: cvl_0_0 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:11.202 Found net devices under 0000:86:00.1: cvl_0_1 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:11.202 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:11.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:24:11.463 00:24:11.463 --- 10.0.0.2 ping statistics --- 00:24:11.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.463 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:11.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:11.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:24:11.463 00:24:11.463 --- 10.0.0.1 ping statistics --- 00:24:11.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.463 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1534382 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1534382 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1534382 ']' 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:11.463 11:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:11.463 [2024-07-26 11:12:30.854748] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:11.463 [2024-07-26 11:12:30.854790] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.463 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.463 [2024-07-26 11:12:30.915284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:11.723 [2024-07-26 11:12:30.997537] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.723 [2024-07-26 11:12:30.997574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.723 [2024-07-26 11:12:30.997581] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.723 [2024-07-26 11:12:30.997587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.723 [2024-07-26 11:12:30.997593] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.723 [2024-07-26 11:12:30.997635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.723 [2024-07-26 11:12:30.997652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.724 [2024-07-26 11:12:30.997739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:11.724 [2024-07-26 11:12:30.997741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.294 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:12.294 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:24:12.294 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:12.294 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.294 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.294 [2024-07-26 11:12:31.677434] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.294 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.294 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.295 Malloc0 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.295 [2024-07-26 11:12:31.761269] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.295 [ 00:24:12.295 { 00:24:12.295 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:12.295 "subtype": "Discovery", 00:24:12.295 "listen_addresses": [ 00:24:12.295 { 00:24:12.295 "trtype": "TCP", 00:24:12.295 "adrfam": "IPv4", 00:24:12.295 "traddr": "10.0.0.2", 00:24:12.295 "trsvcid": "4420" 00:24:12.295 } 00:24:12.295 ], 00:24:12.295 "allow_any_host": true, 00:24:12.295 "hosts": [] 00:24:12.295 }, 00:24:12.295 { 00:24:12.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.295 "subtype": "NVMe", 00:24:12.295 "listen_addresses": [ 00:24:12.295 { 00:24:12.295 "trtype": "TCP", 00:24:12.295 "adrfam": "IPv4", 00:24:12.295 "traddr": "10.0.0.2", 00:24:12.295 "trsvcid": "4420" 00:24:12.295 } 00:24:12.295 ], 00:24:12.295 "allow_any_host": true, 00:24:12.295 "hosts": [], 00:24:12.295 "serial_number": "SPDK00000000000001", 00:24:12.295 "model_number": "SPDK bdev Controller", 00:24:12.295 "max_namespaces": 32, 00:24:12.295 "min_cntlid": 1, 00:24:12.295 "max_cntlid": 65519, 00:24:12.295 "namespaces": [ 00:24:12.295 { 00:24:12.295 "nsid": 1, 00:24:12.295 "bdev_name": "Malloc0", 00:24:12.295 "name": "Malloc0", 00:24:12.295 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:12.295 "eui64": "ABCDEF0123456789", 00:24:12.295 "uuid": "951abb45-9baf-4969-81b6-f99f9cd48d48" 00:24:12.295 } 00:24:12.295 ] 00:24:12.295 } 00:24:12.295 ] 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.295 11:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:12.561 [2024-07-26 11:12:31.811585] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:12.561 [2024-07-26 11:12:31.811619] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1534513 ] 00:24:12.561 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.561 [2024-07-26 11:12:31.841608] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:12.561 [2024-07-26 11:12:31.841654] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:12.561 [2024-07-26 11:12:31.841659] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:12.561 [2024-07-26 11:12:31.841672] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:12.561 [2024-07-26 11:12:31.841680] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:12.561 [2024-07-26 11:12:31.842338] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:12.561 [2024-07-26 11:12:31.842367] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd20ec0 0 00:24:12.561 [2024-07-26 11:12:31.857051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:12.561 [2024-07-26 11:12:31.857071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:12.561 [2024-07-26 11:12:31.857076] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:12.561 [2024-07-26 11:12:31.857079] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:12.561 [2024-07-26 11:12:31.857118] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.857127] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.857131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd20ec0) 00:24:12.561 [2024-07-26 11:12:31.857145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:12.561 [2024-07-26 11:12:31.857160] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3e40, cid 0, qid 0 00:24:12.561 [2024-07-26 11:12:31.865054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.561 [2024-07-26 11:12:31.865062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.561 [2024-07-26 11:12:31.865065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.865069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3e40) on tqpair=0xd20ec0 00:24:12.561 [2024-07-26 11:12:31.865078] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:12.561 [2024-07-26 11:12:31.865084] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:12.561 [2024-07-26 11:12:31.865088] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting 
state to read vs wait for vs (no timeout) 00:24:12.561 [2024-07-26 11:12:31.865101] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.865104] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.865107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd20ec0) 00:24:12.561 [2024-07-26 11:12:31.865114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.561 [2024-07-26 11:12:31.865126] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3e40, cid 0, qid 0 00:24:12.561 [2024-07-26 11:12:31.865310] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.561 [2024-07-26 11:12:31.865322] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.561 [2024-07-26 11:12:31.865326] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.865330] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3e40) on tqpair=0xd20ec0 00:24:12.561 [2024-07-26 11:12:31.865338] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:12.561 [2024-07-26 11:12:31.865348] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:12.561 [2024-07-26 11:12:31.865356] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.865360] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.865363] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd20ec0) 00:24:12.561 [2024-07-26 11:12:31.865371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.561 [2024-07-26 11:12:31.865384] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3e40, cid 0, qid 0 00:24:12.561 [2024-07-26 11:12:31.865548] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.561 [2024-07-26 11:12:31.865558] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.561 [2024-07-26 11:12:31.865561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.865565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3e40) on tqpair=0xd20ec0 00:24:12.561 [2024-07-26 11:12:31.865570] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:12.561 [2024-07-26 11:12:31.865578] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:12.561 [2024-07-26 11:12:31.865585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.865589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.865595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd20ec0) 00:24:12.561 [2024-07-26 11:12:31.865602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.561 [2024-07-26 11:12:31.865614] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3e40, cid 0, qid 0 00:24:12.561 [2024-07-26 11:12:31.865776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.561 [2024-07-26 11:12:31.865786] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.561 [2024-07-26 11:12:31.865789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.865792] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3e40) on tqpair=0xd20ec0 00:24:12.561 [2024-07-26 11:12:31.865798] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:12.561 [2024-07-26 11:12:31.865809] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.865812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.865816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd20ec0) 00:24:12.561 [2024-07-26 11:12:31.865822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.561 [2024-07-26 11:12:31.865834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3e40, cid 0, qid 0 00:24:12.561 [2024-07-26 11:12:31.865995] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.561 [2024-07-26 11:12:31.866004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.561 [2024-07-26 11:12:31.866008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.866011] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3e40) on tqpair=0xd20ec0 00:24:12.561 [2024-07-26 11:12:31.866016] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:12.561 [2024-07-26 11:12:31.866021] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:12.561 [2024-07-26 11:12:31.866029] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:12.561 [2024-07-26 11:12:31.866134] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:12.561 [2024-07-26 11:12:31.866139] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:12.561 [2024-07-26 11:12:31.866147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.866151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.866154] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd20ec0) 00:24:12.561 [2024-07-26 11:12:31.866160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.561 [2024-07-26 11:12:31.866173] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3e40, cid 0, qid 0 00:24:12.561 [2024-07-26 11:12:31.866335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.561 
[2024-07-26 11:12:31.866345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.561 [2024-07-26 11:12:31.866348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.866351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3e40) on tqpair=0xd20ec0 00:24:12.561 [2024-07-26 11:12:31.866356] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:12.561 [2024-07-26 11:12:31.866367] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.866374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.866377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd20ec0) 00:24:12.561 [2024-07-26 11:12:31.866383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.561 [2024-07-26 11:12:31.866395] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3e40, cid 0, qid 0 00:24:12.561 [2024-07-26 11:12:31.866551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.561 [2024-07-26 11:12:31.866561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.561 [2024-07-26 11:12:31.866564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.561 [2024-07-26 11:12:31.866567] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3e40) on tqpair=0xd20ec0 00:24:12.562 [2024-07-26 11:12:31.866571] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:12.562 [2024-07-26 11:12:31.866576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:12.562 [2024-07-26 11:12:31.866584] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:12.562 [2024-07-26 11:12:31.866595] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:12.562 [2024-07-26 11:12:31.866605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.866609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd20ec0) 00:24:12.562 [2024-07-26 11:12:31.866615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.562 [2024-07-26 11:12:31.866628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3e40, cid 0, qid 0 00:24:12.562 [2024-07-26 11:12:31.866828] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:12.562 [2024-07-26 11:12:31.866838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:12.562 [2024-07-26 11:12:31.866841] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.866845] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd20ec0): datao=0, datal=4096, cccid=0 00:24:12.562 [2024-07-26 11:12:31.866849] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0xda3e40) on tqpair(0xd20ec0): expected_datao=0, payload_size=4096 00:24:12.562 [2024-07-26 11:12:31.866854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.866861] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.866865] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.562 [2024-07-26 11:12:31.867187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.562 [2024-07-26 11:12:31.867190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3e40) on tqpair=0xd20ec0 00:24:12.562 [2024-07-26 11:12:31.867201] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:12.562 [2024-07-26 11:12:31.867205] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:12.562 [2024-07-26 11:12:31.867209] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:12.562 [2024-07-26 11:12:31.867214] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:12.562 [2024-07-26 11:12:31.867218] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:12.562 [2024-07-26 11:12:31.867225] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:12.562 [2024-07-26 11:12:31.867233] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:12.562 [2024-07-26 11:12:31.867242] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867246] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867249] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd20ec0) 00:24:12.562 [2024-07-26 11:12:31.867256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:12.562 [2024-07-26 11:12:31.867268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3e40, cid 0, qid 0 00:24:12.562 [2024-07-26 11:12:31.867432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.562 [2024-07-26 11:12:31.867442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.562 [2024-07-26 11:12:31.867445] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3e40) on tqpair=0xd20ec0 00:24:12.562 [2024-07-26 11:12:31.867457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd20ec0) 00:24:12.562 [2024-07-26 11:12:31.867469] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.562 [2024-07-26 11:12:31.867475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd20ec0) 00:24:12.562 [2024-07-26 11:12:31.867486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.562 [2024-07-26 11:12:31.867491] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd20ec0) 00:24:12.562 [2024-07-26 11:12:31.867502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.562 [2024-07-26 11:12:31.867507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867514] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.562 [2024-07-26 11:12:31.867519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.562 [2024-07-26 11:12:31.867523] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:12.562 [2024-07-26 11:12:31.867535] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:12.562 [2024-07-26 11:12:31.867541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867545] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd20ec0) 00:24:12.562 [2024-07-26 11:12:31.867550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.562 [2024-07-26 11:12:31.867567] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3e40, cid 0, qid 0 00:24:12.562 [2024-07-26 11:12:31.867572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda3fc0, cid 1, qid 0 00:24:12.562 [2024-07-26 11:12:31.867576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda4140, cid 2, qid 0 00:24:12.562 [2024-07-26 11:12:31.867580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.562 [2024-07-26 11:12:31.867584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda4440, cid 4, qid 0 00:24:12.562 [2024-07-26 11:12:31.867783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.562 [2024-07-26 11:12:31.867793] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.562 [2024-07-26 11:12:31.867796] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867799] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda4440) on tqpair=0xd20ec0 00:24:12.562 [2024-07-26 11:12:31.867804] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:12.562 [2024-07-26 11:12:31.867809] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:12.562 [2024-07-26 11:12:31.867821] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.867824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd20ec0) 00:24:12.562 [2024-07-26 11:12:31.867831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.562 [2024-07-26 11:12:31.867844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda4440, cid 4, qid 0 00:24:12.562 [2024-07-26 11:12:31.868016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:12.562 [2024-07-26 11:12:31.868026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:12.562 [2024-07-26 11:12:31.868029] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.868032] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd20ec0): datao=0, datal=4096, cccid=4 00:24:12.562 [2024-07-26 11:12:31.868036] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xda4440) on tqpair(0xd20ec0): expected_datao=0, payload_size=4096 00:24:12.562 [2024-07-26 11:12:31.868040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.868329] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.868333] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.910051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.562 [2024-07-26 11:12:31.910060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.562 [2024-07-26 11:12:31.910063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.910067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda4440) on tqpair=0xd20ec0 00:24:12.562 [2024-07-26 11:12:31.910080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:12.562 [2024-07-26 11:12:31.910103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.910108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd20ec0) 00:24:12.562 [2024-07-26 11:12:31.910114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.562 [2024-07-26 11:12:31.910120] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.910123] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.562 [2024-07-26 11:12:31.910126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd20ec0) 00:24:12.562 [2024-07-26 11:12:31.910131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:12.562 [2024-07-26 11:12:31.910150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda4440, cid 4, qid 0 00:24:12.562 [2024-07-26 11:12:31.910155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda45c0, cid 5, qid 0 00:24:12.562 [2024-07-26 11:12:31.910347] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:12.562 [2024-07-26 11:12:31.910358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:12.563 [2024-07-26 11:12:31.910361] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.910365] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd20ec0): datao=0, datal=1024, cccid=4 00:24:12.563 [2024-07-26 11:12:31.910368] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xda4440) on tqpair(0xd20ec0): expected_datao=0, payload_size=1024 00:24:12.563 [2024-07-26 11:12:31.910372] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.910378] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.910382] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.910387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.563 [2024-07-26 11:12:31.910392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.563 [2024-07-26 11:12:31.910395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.910398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda45c0) on tqpair=0xd20ec0 00:24:12.563 [2024-07-26 11:12:31.952284] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.563 [2024-07-26 11:12:31.952299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.563 [2024-07-26 11:12:31.952302] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.952306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda4440) on tqpair=0xd20ec0 00:24:12.563 [2024-07-26 11:12:31.952325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.952329] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd20ec0) 00:24:12.563 [2024-07-26 11:12:31.952336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.563 [2024-07-26 11:12:31.952355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda4440, cid 4, qid 0 00:24:12.563 [2024-07-26 11:12:31.952619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:12.563 [2024-07-26 11:12:31.952630] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:12.563 [2024-07-26 11:12:31.952633] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.952636] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd20ec0): datao=0, datal=3072, cccid=4 00:24:12.563 [2024-07-26 11:12:31.952640] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xda4440) on tqpair(0xd20ec0): expected_datao=0, payload_size=3072 00:24:12.563 [2024-07-26 11:12:31.952644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.952650] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.952654] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.952972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.563 [2024-07-26 11:12:31.952978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.563 [2024-07-26 11:12:31.952981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.952984] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda4440) on tqpair=0xd20ec0 00:24:12.563 [2024-07-26 11:12:31.952993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.952997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd20ec0) 00:24:12.563 [2024-07-26 11:12:31.953003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.563 [2024-07-26 11:12:31.953022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda4440, cid 4, qid 0 00:24:12.563 [2024-07-26 11:12:31.953191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:12.563 [2024-07-26 11:12:31.953202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:12.563 [2024-07-26 11:12:31.953205] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.953208] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd20ec0): datao=0, datal=8, cccid=4 00:24:12.563 [2024-07-26 11:12:31.953212] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xda4440) on tqpair(0xd20ec0): expected_datao=0, payload_size=8 00:24:12.563 [2024-07-26 11:12:31.953216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.953222] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.953225] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.994487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.563 [2024-07-26 11:12:31.994497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.563 [2024-07-26 11:12:31.994500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.563 [2024-07-26 11:12:31.994504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda4440) on tqpair=0xd20ec0 00:24:12.563 ===================================================== 00:24:12.563 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:12.563 ===================================================== 00:24:12.563 Controller Capabilities/Features 00:24:12.563 ================================ 00:24:12.563 Vendor ID: 0000 00:24:12.563 Subsystem Vendor ID: 0000 00:24:12.563 Serial Number: .................... 00:24:12.563 Model Number: ........................................ 
00:24:12.563 Firmware Version: 24.09 00:24:12.563 Recommended Arb Burst: 0 00:24:12.563 IEEE OUI Identifier: 00 00 00 00:24:12.563 Multi-path I/O 00:24:12.563 May have multiple subsystem ports: No 00:24:12.563 May have multiple controllers: No 00:24:12.563 Associated with SR-IOV VF: No 00:24:12.563 Max Data Transfer Size: 131072 00:24:12.563 Max Number of Namespaces: 0 00:24:12.563 Max Number of I/O Queues: 1024 00:24:12.563 NVMe Specification Version (VS): 1.3 00:24:12.563 NVMe Specification Version (Identify): 1.3 00:24:12.563 Maximum Queue Entries: 128 00:24:12.563 Contiguous Queues Required: Yes 00:24:12.563 Arbitration Mechanisms Supported 00:24:12.563 Weighted Round Robin: Not Supported 00:24:12.563 Vendor Specific: Not Supported 00:24:12.563 Reset Timeout: 15000 ms 00:24:12.563 Doorbell Stride: 4 bytes 00:24:12.563 NVM Subsystem Reset: Not Supported 00:24:12.563 Command Sets Supported 00:24:12.563 NVM Command Set: Supported 00:24:12.563 Boot Partition: Not Supported 00:24:12.563 Memory Page Size Minimum: 4096 bytes 00:24:12.563 Memory Page Size Maximum: 4096 bytes 00:24:12.563 Persistent Memory Region: Not Supported 00:24:12.563 Optional Asynchronous Events Supported 00:24:12.563 Namespace Attribute Notices: Not Supported 00:24:12.563 Firmware Activation Notices: Not Supported 00:24:12.563 ANA Change Notices: Not Supported 00:24:12.563 PLE Aggregate Log Change Notices: Not Supported 00:24:12.563 LBA Status Info Alert Notices: Not Supported 00:24:12.563 EGE Aggregate Log Change Notices: Not Supported 00:24:12.563 Normal NVM Subsystem Shutdown event: Not Supported 00:24:12.563 Zone Descriptor Change Notices: Not Supported 00:24:12.563 Discovery Log Change Notices: Supported 00:24:12.563 Controller Attributes 00:24:12.563 128-bit Host Identifier: Not Supported 00:24:12.563 Non-Operational Permissive Mode: Not Supported 00:24:12.563 NVM Sets: Not Supported 00:24:12.563 Read Recovery Levels: Not Supported 00:24:12.563 Endurance Groups: Not Supported 00:24:12.563 Predictable Latency Mode: Not Supported 00:24:12.563 Traffic Based Keep ALive: Not Supported 00:24:12.563 Namespace Granularity: Not Supported 00:24:12.563 SQ Associations: Not Supported 00:24:12.563 UUID List: Not Supported 00:24:12.563 Multi-Domain Subsystem: Not Supported 00:24:12.563 Fixed Capacity Management: Not Supported 00:24:12.563 Variable Capacity Management: Not Supported 00:24:12.563 Delete Endurance Group: Not Supported 00:24:12.563 Delete NVM Set: Not Supported 00:24:12.563 Extended LBA Formats Supported: Not Supported 00:24:12.563 Flexible Data Placement Supported: Not Supported 00:24:12.563 00:24:12.563 Controller Memory Buffer Support 00:24:12.563 ================================ 00:24:12.563 Supported: No 00:24:12.563 00:24:12.563 Persistent Memory Region Support 00:24:12.563 ================================ 00:24:12.563 Supported: No 00:24:12.563 00:24:12.563 Admin Command Set Attributes 00:24:12.563 ============================ 00:24:12.563 Security Send/Receive: Not Supported 00:24:12.563 Format NVM: Not Supported 00:24:12.563 Firmware Activate/Download: Not Supported 00:24:12.563 Namespace Management: Not Supported 00:24:12.563 Device Self-Test: Not Supported 00:24:12.563 Directives: Not Supported 00:24:12.563 NVMe-MI: Not Supported 00:24:12.563 Virtualization Management: Not Supported 00:24:12.563 Doorbell Buffer Config: Not Supported 00:24:12.563 Get LBA Status Capability: Not Supported 00:24:12.563 Command & Feature Lockdown Capability: Not Supported 00:24:12.563 Abort Command Limit: 1 00:24:12.563 Async 
Event Request Limit: 4 00:24:12.563 Number of Firmware Slots: N/A 00:24:12.563 Firmware Slot 1 Read-Only: N/A 00:24:12.563 Firmware Activation Without Reset: N/A 00:24:12.563 Multiple Update Detection Support: N/A 00:24:12.563 Firmware Update Granularity: No Information Provided 00:24:12.563 Per-Namespace SMART Log: No 00:24:12.563 Asymmetric Namespace Access Log Page: Not Supported 00:24:12.563 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:12.563 Command Effects Log Page: Not Supported 00:24:12.563 Get Log Page Extended Data: Supported 00:24:12.563 Telemetry Log Pages: Not Supported 00:24:12.563 Persistent Event Log Pages: Not Supported 00:24:12.563 Supported Log Pages Log Page: May Support 00:24:12.563 Commands Supported & Effects Log Page: Not Supported 00:24:12.563 Feature Identifiers & Effects Log Page:May Support 00:24:12.563 NVMe-MI Commands & Effects Log Page: May Support 00:24:12.564 Data Area 4 for Telemetry Log: Not Supported 00:24:12.564 Error Log Page Entries Supported: 128 00:24:12.564 Keep Alive: Not Supported 00:24:12.564 00:24:12.564 NVM Command Set Attributes 00:24:12.564 ========================== 00:24:12.564 Submission Queue Entry Size 00:24:12.564 Max: 1 00:24:12.564 Min: 1 00:24:12.564 Completion Queue Entry Size 00:24:12.564 Max: 1 00:24:12.564 Min: 1 00:24:12.564 Number of Namespaces: 0 00:24:12.564 Compare Command: Not Supported 00:24:12.564 Write Uncorrectable Command: Not Supported 00:24:12.564 Dataset Management Command: Not Supported 00:24:12.564 Write Zeroes Command: Not Supported 00:24:12.564 Set Features Save Field: Not Supported 00:24:12.564 Reservations: Not Supported 00:24:12.564 Timestamp: Not Supported 00:24:12.564 Copy: Not Supported 00:24:12.564 Volatile Write Cache: Not Present 00:24:12.564 Atomic Write Unit (Normal): 1 00:24:12.564 Atomic Write Unit (PFail): 1 00:24:12.564 Atomic Compare & Write Unit: 1 00:24:12.564 Fused Compare & Write: Supported 00:24:12.564 Scatter-Gather List 00:24:12.564 SGL Command Set: Supported 00:24:12.564 SGL Keyed: Supported 00:24:12.564 SGL Bit Bucket Descriptor: Not Supported 00:24:12.564 SGL Metadata Pointer: Not Supported 00:24:12.564 Oversized SGL: Not Supported 00:24:12.564 SGL Metadata Address: Not Supported 00:24:12.564 SGL Offset: Supported 00:24:12.564 Transport SGL Data Block: Not Supported 00:24:12.564 Replay Protected Memory Block: Not Supported 00:24:12.564 00:24:12.564 Firmware Slot Information 00:24:12.564 ========================= 00:24:12.564 Active slot: 0 00:24:12.564 00:24:12.564 00:24:12.564 Error Log 00:24:12.564 ========= 00:24:12.564 00:24:12.564 Active Namespaces 00:24:12.564 ================= 00:24:12.564 Discovery Log Page 00:24:12.564 ================== 00:24:12.564 Generation Counter: 2 00:24:12.564 Number of Records: 2 00:24:12.564 Record Format: 0 00:24:12.564 00:24:12.564 Discovery Log Entry 0 00:24:12.564 ---------------------- 00:24:12.564 Transport Type: 3 (TCP) 00:24:12.564 Address Family: 1 (IPv4) 00:24:12.564 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:12.564 Entry Flags: 00:24:12.564 Duplicate Returned Information: 1 00:24:12.564 Explicit Persistent Connection Support for Discovery: 1 00:24:12.564 Transport Requirements: 00:24:12.564 Secure Channel: Not Required 00:24:12.564 Port ID: 0 (0x0000) 00:24:12.564 Controller ID: 65535 (0xffff) 00:24:12.564 Admin Max SQ Size: 128 00:24:12.564 Transport Service Identifier: 4420 00:24:12.564 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:12.564 Transport Address: 10.0.0.2 00:24:12.564 
Discovery Log Entry 1 00:24:12.564 ---------------------- 00:24:12.564 Transport Type: 3 (TCP) 00:24:12.564 Address Family: 1 (IPv4) 00:24:12.564 Subsystem Type: 2 (NVM Subsystem) 00:24:12.564 Entry Flags: 00:24:12.564 Duplicate Returned Information: 0 00:24:12.564 Explicit Persistent Connection Support for Discovery: 0 00:24:12.564 Transport Requirements: 00:24:12.564 Secure Channel: Not Required 00:24:12.564 Port ID: 0 (0x0000) 00:24:12.564 Controller ID: 65535 (0xffff) 00:24:12.564 Admin Max SQ Size: 128 00:24:12.564 Transport Service Identifier: 4420 00:24:12.564 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:12.564 Transport Address: 10.0.0.2 [2024-07-26 11:12:31.994581] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:12.564 [2024-07-26 11:12:31.994591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3e40) on tqpair=0xd20ec0 00:24:12.564 [2024-07-26 11:12:31.994597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.564 [2024-07-26 11:12:31.994602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda3fc0) on tqpair=0xd20ec0 00:24:12.564 [2024-07-26 11:12:31.994606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.564 [2024-07-26 11:12:31.994610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda4140) on tqpair=0xd20ec0 00:24:12.564 [2024-07-26 11:12:31.994614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.564 [2024-07-26 11:12:31.994618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.564 [2024-07-26 11:12:31.994622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.564 [2024-07-26 11:12:31.994632] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.994636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.994639] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.564 [2024-07-26 11:12:31.994646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.564 [2024-07-26 11:12:31.994659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.564 [2024-07-26 11:12:31.994813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.564 [2024-07-26 11:12:31.994823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.564 [2024-07-26 11:12:31.994826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.994830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.564 [2024-07-26 11:12:31.994837] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.994841] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.994844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.564 [2024-07-26 11:12:31.994851] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.564 [2024-07-26 11:12:31.994870] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.564 [2024-07-26 11:12:31.995035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.564 [2024-07-26 11:12:31.995050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.564 [2024-07-26 11:12:31.995054] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.995058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.564 [2024-07-26 11:12:31.995062] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:12.564 [2024-07-26 11:12:31.995066] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:12.564 [2024-07-26 11:12:31.995077] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.995081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.995084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.564 [2024-07-26 11:12:31.995090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.564 [2024-07-26 11:12:31.995103] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.564 [2024-07-26 11:12:31.995265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.564 [2024-07-26 11:12:31.995274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.564 [2024-07-26 11:12:31.995277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.995281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.564 [2024-07-26 11:12:31.995292] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.995296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.995299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.564 [2024-07-26 11:12:31.995305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.564 [2024-07-26 11:12:31.995317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.564 [2024-07-26 11:12:31.995482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.564 [2024-07-26 11:12:31.995492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.564 [2024-07-26 11:12:31.995495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.995498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.564 [2024-07-26 11:12:31.995509] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.995513] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.995516] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.564 [2024-07-26 11:12:31.995522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.564 [2024-07-26 11:12:31.995534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.564 [2024-07-26 11:12:31.995700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.564 [2024-07-26 11:12:31.995710] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.564 [2024-07-26 11:12:31.995712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.995716] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.564 [2024-07-26 11:12:31.995727] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.564 [2024-07-26 11:12:31.995731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.995738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.565 [2024-07-26 11:12:31.995744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.565 [2024-07-26 11:12:31.995757] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.565 [2024-07-26 11:12:31.995914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.565 [2024-07-26 11:12:31.995923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.565 [2024-07-26 11:12:31.995926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.995930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.565 [2024-07-26 11:12:31.995941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.995945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.995948] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.565 [2024-07-26 11:12:31.995954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.565 [2024-07-26 11:12:31.995966] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.565 [2024-07-26 11:12:31.996138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.565 [2024-07-26 11:12:31.996148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.565 [2024-07-26 11:12:31.996151] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.996154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.565 [2024-07-26 11:12:31.996165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.996169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.996172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.565 [2024-07-26 11:12:31.996179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.565 [2024-07-26 11:12:31.996191] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.565 [2024-07-26 11:12:31.996347] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.565 [2024-07-26 11:12:31.996357] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.565 [2024-07-26 11:12:31.996361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.996364] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.565 [2024-07-26 11:12:31.996375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.996379] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.996382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.565 [2024-07-26 11:12:31.996388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.565 [2024-07-26 11:12:31.996401] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.565 [2024-07-26 11:12:31.996556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.565 [2024-07-26 11:12:31.996566] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.565 [2024-07-26 11:12:31.996569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.996573] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.565 [2024-07-26 11:12:31.996584] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.996587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.996590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.565 [2024-07-26 11:12:31.996600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.565 [2024-07-26 11:12:31.996612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.565 [2024-07-26 11:12:31.996776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.565 [2024-07-26 11:12:31.996786] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.565 [2024-07-26 11:12:31.996789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.996793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.565 [2024-07-26 11:12:31.996803] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.996807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.996810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.565 [2024-07-26 11:12:31.996816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.565 [2024-07-26 11:12:31.996828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.565 [2024-07-26 11:12:31.996987] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.565 [2024-07-26 11:12:31.996997] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.565 [2024-07-26 11:12:31.997000] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.997003] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.565 [2024-07-26 11:12:31.997014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.997018] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.997021] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.565 [2024-07-26 11:12:31.997027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.565 [2024-07-26 11:12:31.997039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.565 [2024-07-26 11:12:31.997199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.565 [2024-07-26 11:12:31.997209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.565 [2024-07-26 11:12:31.997212] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.997215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.565 [2024-07-26 11:12:31.997226] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.997230] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.997233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.565 [2024-07-26 11:12:31.997239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.565 [2024-07-26 11:12:31.997251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.565 [2024-07-26 11:12:31.997407] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.565 [2024-07-26 11:12:31.997417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.565 [2024-07-26 11:12:31.997420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.997423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.565 [2024-07-26 11:12:31.997434] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.997438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.997441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.565 [2024-07-26 11:12:31.997447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.565 [2024-07-26 11:12:31.997462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.565 [2024-07-26 11:12:31.997622] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.565 [2024-07-26 11:12:31.997632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.565 [2024-07-26 11:12:31.997635] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.997638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.565 [2024-07-26 11:12:31.997650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.997653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.565 [2024-07-26 11:12:31.997656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.565 [2024-07-26 11:12:31.997663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.565 [2024-07-26 11:12:31.997674] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.565 [2024-07-26 11:12:31.997833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.565 [2024-07-26 11:12:31.997843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.565 [2024-07-26 11:12:31.997846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.997850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.566 [2024-07-26 11:12:31.997861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.997864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.997868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.566 [2024-07-26 11:12:31.997874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.566 [2024-07-26 11:12:31.997886] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.566 [2024-07-26 11:12:31.998047] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.566 [2024-07-26 11:12:31.998057] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.566 [2024-07-26 11:12:31.998060] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.998064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.566 [2024-07-26 11:12:31.998075] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.998078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.998082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.566 [2024-07-26 11:12:31.998088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.566 [2024-07-26 11:12:31.998100] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.566 [2024-07-26 11:12:31.998258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.566 [2024-07-26 11:12:31.998268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.566 [2024-07-26 11:12:31.998271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.998274] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.566 
[2024-07-26 11:12:31.998285] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.998289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.998292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.566 [2024-07-26 11:12:31.998298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.566 [2024-07-26 11:12:31.998310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.566 [2024-07-26 11:12:31.998472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.566 [2024-07-26 11:12:31.998482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.566 [2024-07-26 11:12:31.998485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.998489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.566 [2024-07-26 11:12:31.998500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.998503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.998506] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.566 [2024-07-26 11:12:31.998512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.566 [2024-07-26 11:12:31.998524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.566 [2024-07-26 11:12:31.998680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.566 [2024-07-26 11:12:31.998690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.566 [2024-07-26 11:12:31.998693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.998696] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.566 [2024-07-26 11:12:31.998707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.998711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.998714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.566 [2024-07-26 11:12:31.998720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.566 [2024-07-26 11:12:31.998732] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.566 [2024-07-26 11:12:31.998895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.566 [2024-07-26 11:12:31.998904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.566 [2024-07-26 11:12:31.998907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.998911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.566 [2024-07-26 11:12:31.998921] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.998925] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.566 [2024-07-26 
11:12:31.998928] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.566 [2024-07-26 11:12:31.998934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.566 [2024-07-26 11:12:31.998947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.566 [2024-07-26 11:12:31.999112] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.566 [2024-07-26 11:12:31.999122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.566 [2024-07-26 11:12:31.999125] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999129] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.566 [2024-07-26 11:12:31.999140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999143] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999147] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.566 [2024-07-26 11:12:31.999153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.566 [2024-07-26 11:12:31.999165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.566 [2024-07-26 11:12:31.999324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.566 [2024-07-26 11:12:31.999337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.566 [2024-07-26 11:12:31.999340] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999343] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.566 [2024-07-26 11:12:31.999354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999358] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.566 [2024-07-26 11:12:31.999367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.566 [2024-07-26 11:12:31.999379] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.566 [2024-07-26 11:12:31.999539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.566 [2024-07-26 11:12:31.999548] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.566 [2024-07-26 11:12:31.999551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.566 [2024-07-26 11:12:31.999566] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.566 [2024-07-26 11:12:31.999579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.566 [2024-07-26 11:12:31.999591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.566 [2024-07-26 11:12:31.999753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.566 [2024-07-26 11:12:31.999762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.566 [2024-07-26 11:12:31.999765] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.566 [2024-07-26 11:12:31.999780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.566 [2024-07-26 11:12:31.999793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.566 [2024-07-26 11:12:31.999805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.566 [2024-07-26 11:12:31.999962] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.566 [2024-07-26 11:12:31.999971] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.566 [2024-07-26 11:12:31.999975] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999978] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.566 [2024-07-26 11:12:31.999989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:31.999995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.566 [2024-07-26 11:12:32.000002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.566 [2024-07-26 11:12:32.000014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.566 [2024-07-26 11:12:32.003609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.566 [2024-07-26 11:12:32.003617] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.566 [2024-07-26 11:12:32.003623] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:32.003627] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.566 [2024-07-26 11:12:32.003637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:32.003640] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.566 [2024-07-26 11:12:32.003643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd20ec0) 00:24:12.566 [2024-07-26 11:12:32.003650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.566 [2024-07-26 11:12:32.003661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xda42c0, cid 3, qid 0 00:24:12.567 [2024-07-26 
11:12:32.003969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.567 [2024-07-26 11:12:32.003978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.567 [2024-07-26 11:12:32.003982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.567 [2024-07-26 11:12:32.003985] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xda42c0) on tqpair=0xd20ec0 00:24:12.567 [2024-07-26 11:12:32.003994] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds 00:24:12.567 00:24:12.567 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:12.567 [2024-07-26 11:12:32.044629] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:12.567 [2024-07-26 11:12:32.044662] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1534635 ] 00:24:12.830 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.830 [2024-07-26 11:12:32.074319] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:12.830 [2024-07-26 11:12:32.074358] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:12.830 [2024-07-26 11:12:32.074364] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:12.830 [2024-07-26 11:12:32.074376] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:12.830 [2024-07-26 11:12:32.074383] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:12.830 [2024-07-26 11:12:32.074975] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:12.830 [2024-07-26 11:12:32.074995] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c7eec0 0 00:24:12.830 [2024-07-26 11:12:32.089049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:12.830 [2024-07-26 11:12:32.089061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:12.830 [2024-07-26 11:12:32.089065] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:12.830 [2024-07-26 11:12:32.089068] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:12.830 [2024-07-26 11:12:32.089096] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.089101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.089104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7eec0) 00:24:12.830 [2024-07-26 11:12:32.089115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:12.830 [2024-07-26 11:12:32.089130] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01e40, cid 0, qid 0 00:24:12.830 [2024-07-26 11:12:32.097054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.830 
[2024-07-26 11:12:32.097072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.830 [2024-07-26 11:12:32.097076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.097080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01e40) on tqpair=0x1c7eec0 00:24:12.830 [2024-07-26 11:12:32.097090] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:12.830 [2024-07-26 11:12:32.097096] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:12.830 [2024-07-26 11:12:32.097100] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:12.830 [2024-07-26 11:12:32.097111] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.097114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.097117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7eec0) 00:24:12.830 [2024-07-26 11:12:32.097124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.830 [2024-07-26 11:12:32.097136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01e40, cid 0, qid 0 00:24:12.830 [2024-07-26 11:12:32.097393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.830 [2024-07-26 11:12:32.097406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.830 [2024-07-26 11:12:32.097409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.097413] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01e40) on tqpair=0x1c7eec0 00:24:12.830 [2024-07-26 11:12:32.097422] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:12.830 [2024-07-26 11:12:32.097431] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:12.830 [2024-07-26 11:12:32.097439] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.097442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.097445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7eec0) 00:24:12.830 [2024-07-26 11:12:32.097453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.830 [2024-07-26 11:12:32.097467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01e40, cid 0, qid 0 00:24:12.830 [2024-07-26 11:12:32.097624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.830 [2024-07-26 11:12:32.097634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.830 [2024-07-26 11:12:32.097637] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.097641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01e40) on tqpair=0x1c7eec0 00:24:12.830 [2024-07-26 11:12:32.097645] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:12.830 [2024-07-26 11:12:32.097653] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:12.830 [2024-07-26 11:12:32.097661] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.097664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.097667] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7eec0) 00:24:12.830 [2024-07-26 11:12:32.097674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.830 [2024-07-26 11:12:32.097686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01e40, cid 0, qid 0 00:24:12.830 [2024-07-26 11:12:32.097847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.830 [2024-07-26 11:12:32.097860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.830 [2024-07-26 11:12:32.097863] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.097866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01e40) on tqpair=0x1c7eec0 00:24:12.830 [2024-07-26 11:12:32.097871] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:12.830 [2024-07-26 11:12:32.097883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.097886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.097889] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7eec0) 00:24:12.830 [2024-07-26 11:12:32.097896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.830 [2024-07-26 11:12:32.097909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01e40, cid 0, qid 0 00:24:12.830 [2024-07-26 11:12:32.098078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.830 [2024-07-26 11:12:32.098090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.830 [2024-07-26 11:12:32.098093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.830 [2024-07-26 11:12:32.098096] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01e40) on tqpair=0x1c7eec0 00:24:12.830 [2024-07-26 11:12:32.098100] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:12.830 [2024-07-26 11:12:32.098105] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:12.830 [2024-07-26 11:12:32.098113] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:12.830 [2024-07-26 11:12:32.098218] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:12.830 [2024-07-26 11:12:32.098222] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:12.831 [2024-07-26 11:12:32.098230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:24:12.831 [2024-07-26 11:12:32.098233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.098236] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7eec0) 00:24:12.831 [2024-07-26 11:12:32.098243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.831 [2024-07-26 11:12:32.098256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01e40, cid 0, qid 0 00:24:12.831 [2024-07-26 11:12:32.098412] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.831 [2024-07-26 11:12:32.098423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.831 [2024-07-26 11:12:32.098426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.098430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01e40) on tqpair=0x1c7eec0 00:24:12.831 [2024-07-26 11:12:32.098434] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:12.831 [2024-07-26 11:12:32.098445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.098448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.098451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7eec0) 00:24:12.831 [2024-07-26 11:12:32.098458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.831 [2024-07-26 11:12:32.098470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01e40, cid 0, qid 0 00:24:12.831 [2024-07-26 11:12:32.098625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.831 [2024-07-26 11:12:32.098637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.831 [2024-07-26 11:12:32.098640] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.098643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01e40) on tqpair=0x1c7eec0 00:24:12.831 [2024-07-26 11:12:32.098647] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:12.831 [2024-07-26 11:12:32.098651] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:12.831 [2024-07-26 11:12:32.098660] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:12.831 [2024-07-26 11:12:32.098668] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:12.831 [2024-07-26 11:12:32.098678] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.098681] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7eec0) 00:24:12.831 [2024-07-26 11:12:32.098689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.831 [2024-07-26 11:12:32.098705] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01e40, cid 0, qid 0 00:24:12.831 [2024-07-26 11:12:32.098900] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:12.831 [2024-07-26 11:12:32.098914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:12.831 [2024-07-26 11:12:32.098917] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.098920] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7eec0): datao=0, datal=4096, cccid=0 00:24:12.831 [2024-07-26 11:12:32.098925] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d01e40) on tqpair(0x1c7eec0): expected_datao=0, payload_size=4096 00:24:12.831 [2024-07-26 11:12:32.098928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.099210] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.099215] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.831 [2024-07-26 11:12:32.140285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.831 [2024-07-26 11:12:32.140288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01e40) on tqpair=0x1c7eec0 00:24:12.831 [2024-07-26 11:12:32.140299] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:12.831 [2024-07-26 11:12:32.140304] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:12.831 [2024-07-26 11:12:32.140307] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:12.831 [2024-07-26 11:12:32.140311] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:12.831 [2024-07-26 11:12:32.140315] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:12.831 [2024-07-26 11:12:32.140319] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:12.831 [2024-07-26 11:12:32.140328] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:12.831 [2024-07-26 11:12:32.140338] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7eec0) 00:24:12.831 [2024-07-26 11:12:32.140355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:12.831 [2024-07-26 11:12:32.140368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01e40, cid 0, qid 0 00:24:12.831 [2024-07-26 11:12:32.140525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.831 [2024-07-26 11:12:32.140535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.831 [2024-07-26 11:12:32.140538] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140541] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01e40) on tqpair=0x1c7eec0 00:24:12.831 [2024-07-26 11:12:32.140547] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140554] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c7eec0) 00:24:12.831 [2024-07-26 11:12:32.140560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.831 [2024-07-26 11:12:32.140565] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c7eec0) 00:24:12.831 [2024-07-26 11:12:32.140577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.831 [2024-07-26 11:12:32.140581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140588] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c7eec0) 00:24:12.831 [2024-07-26 11:12:32.140593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.831 [2024-07-26 11:12:32.140598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140604] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7eec0) 00:24:12.831 [2024-07-26 11:12:32.140608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.831 [2024-07-26 11:12:32.140613] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:12.831 [2024-07-26 11:12:32.140624] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:12.831 [2024-07-26 11:12:32.140630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140633] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7eec0) 00:24:12.831 [2024-07-26 11:12:32.140639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.831 [2024-07-26 11:12:32.140652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01e40, cid 0, qid 0 00:24:12.831 [2024-07-26 11:12:32.140657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d01fc0, cid 1, qid 0 00:24:12.831 [2024-07-26 11:12:32.140661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d02140, cid 2, qid 0 00:24:12.831 [2024-07-26 11:12:32.140665] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d022c0, cid 3, qid 0 00:24:12.831 [2024-07-26 11:12:32.140669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d02440, cid 4, qid 0 00:24:12.831 [2024-07-26 11:12:32.140865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.831 [2024-07-26 11:12:32.140874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.831 [2024-07-26 11:12:32.140878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.831 [2024-07-26 11:12:32.140881] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d02440) on tqpair=0x1c7eec0 00:24:12.831 [2024-07-26 11:12:32.140885] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:12.832 [2024-07-26 11:12:32.140890] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.140901] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.140907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.140913] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.140917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.140919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7eec0) 00:24:12.832 [2024-07-26 11:12:32.140926] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:12.832 [2024-07-26 11:12:32.140938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d02440, cid 4, qid 0 00:24:12.832 [2024-07-26 11:12:32.145052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.832 [2024-07-26 11:12:32.145065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.832 [2024-07-26 11:12:32.145067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.145071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d02440) on tqpair=0x1c7eec0 00:24:12.832 [2024-07-26 11:12:32.145128] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.145139] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.145147] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.145151] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7eec0) 00:24:12.832 [2024-07-26 11:12:32.145157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.832 [2024-07-26 11:12:32.145171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d02440, cid 4, qid 0 00:24:12.832 [2024-07-26 11:12:32.145637] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:12.832 [2024-07-26 11:12:32.145642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:12.832 [2024-07-26 11:12:32.145645] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.145648] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7eec0): datao=0, datal=4096, cccid=4 00:24:12.832 [2024-07-26 11:12:32.145652] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d02440) on tqpair(0x1c7eec0): expected_datao=0, payload_size=4096 00:24:12.832 [2024-07-26 11:12:32.145656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.145958] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.145962] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.146391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.832 [2024-07-26 11:12:32.146397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.832 [2024-07-26 11:12:32.146400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.146405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d02440) on tqpair=0x1c7eec0 00:24:12.832 [2024-07-26 11:12:32.146414] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:12.832 [2024-07-26 11:12:32.146428] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.146436] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.146442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.146446] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7eec0) 00:24:12.832 [2024-07-26 11:12:32.146451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.832 [2024-07-26 11:12:32.146462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d02440, cid 4, qid 0 00:24:12.832 [2024-07-26 11:12:32.146640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:12.832 [2024-07-26 11:12:32.146651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:12.832 [2024-07-26 11:12:32.146654] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.146657] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7eec0): datao=0, datal=4096, cccid=4 00:24:12.832 [2024-07-26 11:12:32.146661] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d02440) on tqpair(0x1c7eec0): expected_datao=0, payload_size=4096 00:24:12.832 [2024-07-26 11:12:32.146664] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.146947] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.146951] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.191049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.832 [2024-07-26 
11:12:32.191058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.832 [2024-07-26 11:12:32.191061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.191064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d02440) on tqpair=0x1c7eec0 00:24:12.832 [2024-07-26 11:12:32.191077] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.191086] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.191094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.191098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7eec0) 00:24:12.832 [2024-07-26 11:12:32.191104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.832 [2024-07-26 11:12:32.191117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d02440, cid 4, qid 0 00:24:12.832 [2024-07-26 11:12:32.191370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:12.832 [2024-07-26 11:12:32.191380] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:12.832 [2024-07-26 11:12:32.191383] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.191386] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7eec0): datao=0, datal=4096, cccid=4 00:24:12.832 [2024-07-26 11:12:32.191390] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d02440) on tqpair(0x1c7eec0): expected_datao=0, payload_size=4096 00:24:12.832 [2024-07-26 11:12:32.191393] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.191681] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.191687] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.232282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.832 [2024-07-26 11:12:32.232296] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.832 [2024-07-26 11:12:32.232299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.232303] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d02440) on tqpair=0x1c7eec0 00:24:12.832 [2024-07-26 11:12:32.232311] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.232320] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.232329] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.232335] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.232340] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.232344] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.232348] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:12.832 [2024-07-26 11:12:32.232352] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:12.832 [2024-07-26 11:12:32.232357] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:12.832 [2024-07-26 11:12:32.232371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.232375] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7eec0) 00:24:12.832 [2024-07-26 11:12:32.232381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.832 [2024-07-26 11:12:32.232387] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.232391] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.232394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c7eec0) 00:24:12.832 [2024-07-26 11:12:32.232399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.832 [2024-07-26 11:12:32.232414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d02440, cid 4, qid 0 00:24:12.832 [2024-07-26 11:12:32.232419] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d025c0, cid 5, qid 0 00:24:12.832 [2024-07-26 11:12:32.232592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.832 [2024-07-26 11:12:32.232601] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.832 [2024-07-26 11:12:32.232604] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.232608] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d02440) on tqpair=0x1c7eec0 00:24:12.832 [2024-07-26 11:12:32.232614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.832 [2024-07-26 11:12:32.232619] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.832 [2024-07-26 11:12:32.232622] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.832 [2024-07-26 11:12:32.232625] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d025c0) on tqpair=0x1c7eec0 00:24:12.833 [2024-07-26 11:12:32.232634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.232638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c7eec0) 00:24:12.833 [2024-07-26 11:12:32.232647] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.833 [2024-07-26 11:12:32.232659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d025c0, cid 5, qid 0 00:24:12.833 [2024-07-26 11:12:32.232819] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.833 [2024-07-26 11:12:32.232828] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.833 [2024-07-26 11:12:32.232831] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.232835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d025c0) on tqpair=0x1c7eec0 00:24:12.833 [2024-07-26 11:12:32.232845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.232848] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c7eec0) 00:24:12.833 [2024-07-26 11:12:32.232855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.833 [2024-07-26 11:12:32.232866] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d025c0, cid 5, qid 0 00:24:12.833 [2024-07-26 11:12:32.233036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.833 [2024-07-26 11:12:32.233051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.833 [2024-07-26 11:12:32.233054] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d025c0) on tqpair=0x1c7eec0 00:24:12.833 [2024-07-26 11:12:32.233068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233072] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c7eec0) 00:24:12.833 [2024-07-26 11:12:32.233078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.833 [2024-07-26 11:12:32.233091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d025c0, cid 5, qid 0 00:24:12.833 [2024-07-26 11:12:32.233255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.833 [2024-07-26 11:12:32.233265] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.833 [2024-07-26 11:12:32.233268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d025c0) on tqpair=0x1c7eec0 00:24:12.833 [2024-07-26 11:12:32.233288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c7eec0) 00:24:12.833 [2024-07-26 11:12:32.233298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.833 [2024-07-26 11:12:32.233305] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c7eec0) 00:24:12.833 [2024-07-26 11:12:32.233313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.833 [2024-07-26 11:12:32.233319] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233322] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c7eec0) 00:24:12.833 [2024-07-26 11:12:32.233327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.833 [2024-07-26 11:12:32.233333] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c7eec0) 00:24:12.833 [2024-07-26 11:12:32.233342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.833 [2024-07-26 11:12:32.233357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d025c0, cid 5, qid 0 00:24:12.833 [2024-07-26 11:12:32.233362] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d02440, cid 4, qid 0 00:24:12.833 [2024-07-26 11:12:32.233366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d02740, cid 6, qid 0 00:24:12.833 [2024-07-26 11:12:32.233369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d028c0, cid 7, qid 0 00:24:12.833 [2024-07-26 11:12:32.233711] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:12.833 [2024-07-26 11:12:32.233721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:12.833 [2024-07-26 11:12:32.233724] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233727] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7eec0): datao=0, datal=8192, cccid=5 00:24:12.833 [2024-07-26 11:12:32.233731] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d025c0) on tqpair(0x1c7eec0): expected_datao=0, payload_size=8192 00:24:12.833 [2024-07-26 11:12:32.233734] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233741] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233744] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:12.833 [2024-07-26 11:12:32.233753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:12.833 [2024-07-26 11:12:32.233756] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233759] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7eec0): datao=0, datal=512, cccid=4 00:24:12.833 [2024-07-26 11:12:32.233763] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d02440) on tqpair(0x1c7eec0): expected_datao=0, payload_size=512 00:24:12.833 [2024-07-26 11:12:32.233767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233772] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233775] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:12.833 [2024-07-26 11:12:32.233784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:12.833 [2024-07-26 11:12:32.233787] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
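The admin-queue trace above shows the host pulling the error, SMART, firmware-slot and command-effects log pages (GET LOG PAGE, cid 4-7) from nqn.2016-06.io.spdk:cnode1 over 10.0.0.2:4420. As a hedged aside, the same controller could be cross-checked manually with nvme-cli; this is a sketch only, not part of the automated flow, and the /dev/nvme0 device name is an assumption that should be confirmed with "nvme list" first:

    # Manual cross-check sketch (assumes nvme-cli is installed and the target
    # from this test is still listening on 10.0.0.2:4420).
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl     /dev/nvme0    # controller data corresponding to the Identify dump below
    nvme error-log   /dev/nvme0    # log page 01h, fetched as cid 5 in the trace above
    nvme smart-log   /dev/nvme0    # log page 02h, fetched as cid 4 above
    nvme fw-log      /dev/nvme0    # log page 03h, fetched as cid 6 above
    nvme effects-log /dev/nvme0    # log page 05h, fetched as cid 7 above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
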
00:24:12.833 [2024-07-26 11:12:32.233790] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7eec0): datao=0, datal=512, cccid=6 00:24:12.833 [2024-07-26 11:12:32.233794] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d02740) on tqpair(0x1c7eec0): expected_datao=0, payload_size=512 00:24:12.833 [2024-07-26 11:12:32.233797] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233802] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233805] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:12.833 [2024-07-26 11:12:32.233815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:12.833 [2024-07-26 11:12:32.233817] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233820] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c7eec0): datao=0, datal=4096, cccid=7 00:24:12.833 [2024-07-26 11:12:32.233824] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d028c0) on tqpair(0x1c7eec0): expected_datao=0, payload_size=4096 00:24:12.833 [2024-07-26 11:12:32.233828] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233833] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.233836] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.234085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.833 [2024-07-26 11:12:32.234091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.833 [2024-07-26 11:12:32.234094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.234098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d025c0) on tqpair=0x1c7eec0 00:24:12.833 [2024-07-26 11:12:32.234109] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.833 [2024-07-26 11:12:32.234114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.833 [2024-07-26 11:12:32.234116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.234120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d02440) on tqpair=0x1c7eec0 00:24:12.833 [2024-07-26 11:12:32.234128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.833 [2024-07-26 11:12:32.234133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.833 [2024-07-26 11:12:32.234136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.234139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d02740) on tqpair=0x1c7eec0 00:24:12.833 [2024-07-26 11:12:32.234145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.833 [2024-07-26 11:12:32.234149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.833 [2024-07-26 11:12:32.234152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.833 [2024-07-26 11:12:32.234156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d028c0) on tqpair=0x1c7eec0 00:24:12.833 ===================================================== 00:24:12.833 NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:12.833 ===================================================== 00:24:12.833 Controller Capabilities/Features 00:24:12.833 ================================ 00:24:12.833 Vendor ID: 8086 00:24:12.833 Subsystem Vendor ID: 8086 00:24:12.834 Serial Number: SPDK00000000000001 00:24:12.834 Model Number: SPDK bdev Controller 00:24:12.834 Firmware Version: 24.09 00:24:12.834 Recommended Arb Burst: 6 00:24:12.834 IEEE OUI Identifier: e4 d2 5c 00:24:12.834 Multi-path I/O 00:24:12.834 May have multiple subsystem ports: Yes 00:24:12.834 May have multiple controllers: Yes 00:24:12.834 Associated with SR-IOV VF: No 00:24:12.834 Max Data Transfer Size: 131072 00:24:12.834 Max Number of Namespaces: 32 00:24:12.834 Max Number of I/O Queues: 127 00:24:12.834 NVMe Specification Version (VS): 1.3 00:24:12.834 NVMe Specification Version (Identify): 1.3 00:24:12.834 Maximum Queue Entries: 128 00:24:12.834 Contiguous Queues Required: Yes 00:24:12.834 Arbitration Mechanisms Supported 00:24:12.834 Weighted Round Robin: Not Supported 00:24:12.834 Vendor Specific: Not Supported 00:24:12.834 Reset Timeout: 15000 ms 00:24:12.834 Doorbell Stride: 4 bytes 00:24:12.834 NVM Subsystem Reset: Not Supported 00:24:12.834 Command Sets Supported 00:24:12.834 NVM Command Set: Supported 00:24:12.834 Boot Partition: Not Supported 00:24:12.834 Memory Page Size Minimum: 4096 bytes 00:24:12.834 Memory Page Size Maximum: 4096 bytes 00:24:12.834 Persistent Memory Region: Not Supported 00:24:12.834 Optional Asynchronous Events Supported 00:24:12.834 Namespace Attribute Notices: Supported 00:24:12.834 Firmware Activation Notices: Not Supported 00:24:12.834 ANA Change Notices: Not Supported 00:24:12.834 PLE Aggregate Log Change Notices: Not Supported 00:24:12.834 LBA Status Info Alert Notices: Not Supported 00:24:12.834 EGE Aggregate Log Change Notices: Not Supported 00:24:12.834 Normal NVM Subsystem Shutdown event: Not Supported 00:24:12.834 Zone Descriptor Change Notices: Not Supported 00:24:12.834 Discovery Log Change Notices: Not Supported 00:24:12.834 Controller Attributes 00:24:12.834 128-bit Host Identifier: Supported 00:24:12.834 Non-Operational Permissive Mode: Not Supported 00:24:12.834 NVM Sets: Not Supported 00:24:12.834 Read Recovery Levels: Not Supported 00:24:12.834 Endurance Groups: Not Supported 00:24:12.834 Predictable Latency Mode: Not Supported 00:24:12.834 Traffic Based Keep ALive: Not Supported 00:24:12.834 Namespace Granularity: Not Supported 00:24:12.834 SQ Associations: Not Supported 00:24:12.834 UUID List: Not Supported 00:24:12.834 Multi-Domain Subsystem: Not Supported 00:24:12.834 Fixed Capacity Management: Not Supported 00:24:12.834 Variable Capacity Management: Not Supported 00:24:12.834 Delete Endurance Group: Not Supported 00:24:12.834 Delete NVM Set: Not Supported 00:24:12.834 Extended LBA Formats Supported: Not Supported 00:24:12.834 Flexible Data Placement Supported: Not Supported 00:24:12.834 00:24:12.834 Controller Memory Buffer Support 00:24:12.834 ================================ 00:24:12.834 Supported: No 00:24:12.834 00:24:12.834 Persistent Memory Region Support 00:24:12.834 ================================ 00:24:12.834 Supported: No 00:24:12.834 00:24:12.834 Admin Command Set Attributes 00:24:12.834 ============================ 00:24:12.834 Security Send/Receive: Not Supported 00:24:12.834 Format NVM: Not Supported 00:24:12.834 Firmware Activate/Download: Not Supported 00:24:12.834 Namespace Management: Not Supported 00:24:12.834 Device 
Self-Test: Not Supported 00:24:12.834 Directives: Not Supported 00:24:12.834 NVMe-MI: Not Supported 00:24:12.834 Virtualization Management: Not Supported 00:24:12.834 Doorbell Buffer Config: Not Supported 00:24:12.834 Get LBA Status Capability: Not Supported 00:24:12.834 Command & Feature Lockdown Capability: Not Supported 00:24:12.834 Abort Command Limit: 4 00:24:12.834 Async Event Request Limit: 4 00:24:12.834 Number of Firmware Slots: N/A 00:24:12.834 Firmware Slot 1 Read-Only: N/A 00:24:12.834 Firmware Activation Without Reset: N/A 00:24:12.834 Multiple Update Detection Support: N/A 00:24:12.834 Firmware Update Granularity: No Information Provided 00:24:12.834 Per-Namespace SMART Log: No 00:24:12.834 Asymmetric Namespace Access Log Page: Not Supported 00:24:12.834 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:12.834 Command Effects Log Page: Supported 00:24:12.834 Get Log Page Extended Data: Supported 00:24:12.834 Telemetry Log Pages: Not Supported 00:24:12.834 Persistent Event Log Pages: Not Supported 00:24:12.834 Supported Log Pages Log Page: May Support 00:24:12.834 Commands Supported & Effects Log Page: Not Supported 00:24:12.834 Feature Identifiers & Effects Log Page:May Support 00:24:12.834 NVMe-MI Commands & Effects Log Page: May Support 00:24:12.834 Data Area 4 for Telemetry Log: Not Supported 00:24:12.834 Error Log Page Entries Supported: 128 00:24:12.834 Keep Alive: Supported 00:24:12.834 Keep Alive Granularity: 10000 ms 00:24:12.834 00:24:12.834 NVM Command Set Attributes 00:24:12.834 ========================== 00:24:12.834 Submission Queue Entry Size 00:24:12.834 Max: 64 00:24:12.834 Min: 64 00:24:12.834 Completion Queue Entry Size 00:24:12.834 Max: 16 00:24:12.834 Min: 16 00:24:12.834 Number of Namespaces: 32 00:24:12.834 Compare Command: Supported 00:24:12.834 Write Uncorrectable Command: Not Supported 00:24:12.834 Dataset Management Command: Supported 00:24:12.834 Write Zeroes Command: Supported 00:24:12.834 Set Features Save Field: Not Supported 00:24:12.834 Reservations: Supported 00:24:12.834 Timestamp: Not Supported 00:24:12.834 Copy: Supported 00:24:12.834 Volatile Write Cache: Present 00:24:12.834 Atomic Write Unit (Normal): 1 00:24:12.834 Atomic Write Unit (PFail): 1 00:24:12.834 Atomic Compare & Write Unit: 1 00:24:12.834 Fused Compare & Write: Supported 00:24:12.834 Scatter-Gather List 00:24:12.834 SGL Command Set: Supported 00:24:12.834 SGL Keyed: Supported 00:24:12.834 SGL Bit Bucket Descriptor: Not Supported 00:24:12.834 SGL Metadata Pointer: Not Supported 00:24:12.834 Oversized SGL: Not Supported 00:24:12.834 SGL Metadata Address: Not Supported 00:24:12.834 SGL Offset: Supported 00:24:12.834 Transport SGL Data Block: Not Supported 00:24:12.834 Replay Protected Memory Block: Not Supported 00:24:12.834 00:24:12.834 Firmware Slot Information 00:24:12.834 ========================= 00:24:12.834 Active slot: 1 00:24:12.834 Slot 1 Firmware Revision: 24.09 00:24:12.834 00:24:12.834 00:24:12.834 Commands Supported and Effects 00:24:12.834 ============================== 00:24:12.834 Admin Commands 00:24:12.834 -------------- 00:24:12.834 Get Log Page (02h): Supported 00:24:12.834 Identify (06h): Supported 00:24:12.834 Abort (08h): Supported 00:24:12.834 Set Features (09h): Supported 00:24:12.834 Get Features (0Ah): Supported 00:24:12.834 Asynchronous Event Request (0Ch): Supported 00:24:12.834 Keep Alive (18h): Supported 00:24:12.834 I/O Commands 00:24:12.834 ------------ 00:24:12.834 Flush (00h): Supported LBA-Change 00:24:12.834 Write (01h): Supported LBA-Change 
00:24:12.834 Read (02h): Supported 00:24:12.834 Compare (05h): Supported 00:24:12.834 Write Zeroes (08h): Supported LBA-Change 00:24:12.834 Dataset Management (09h): Supported LBA-Change 00:24:12.834 Copy (19h): Supported LBA-Change 00:24:12.834 00:24:12.834 Error Log 00:24:12.834 ========= 00:24:12.834 00:24:12.834 Arbitration 00:24:12.834 =========== 00:24:12.834 Arbitration Burst: 1 00:24:12.834 00:24:12.834 Power Management 00:24:12.834 ================ 00:24:12.834 Number of Power States: 1 00:24:12.834 Current Power State: Power State #0 00:24:12.834 Power State #0: 00:24:12.834 Max Power: 0.00 W 00:24:12.834 Non-Operational State: Operational 00:24:12.834 Entry Latency: Not Reported 00:24:12.834 Exit Latency: Not Reported 00:24:12.834 Relative Read Throughput: 0 00:24:12.834 Relative Read Latency: 0 00:24:12.834 Relative Write Throughput: 0 00:24:12.834 Relative Write Latency: 0 00:24:12.834 Idle Power: Not Reported 00:24:12.834 Active Power: Not Reported 00:24:12.834 Non-Operational Permissive Mode: Not Supported 00:24:12.834 00:24:12.834 Health Information 00:24:12.834 ================== 00:24:12.834 Critical Warnings: 00:24:12.834 Available Spare Space: OK 00:24:12.834 Temperature: OK 00:24:12.834 Device Reliability: OK 00:24:12.834 Read Only: No 00:24:12.834 Volatile Memory Backup: OK 00:24:12.834 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:12.834 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:12.834 Available Spare: 0% 00:24:12.834 Available Spare Threshold: 0% 00:24:12.834 Life Percentage Used:[2024-07-26 11:12:32.234237] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.835 [2024-07-26 11:12:32.234242] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c7eec0) 00:24:12.835 [2024-07-26 11:12:32.234248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.835 [2024-07-26 11:12:32.234260] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d028c0, cid 7, qid 0 00:24:12.835 [2024-07-26 11:12:32.234438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.835 [2024-07-26 11:12:32.234448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.835 [2024-07-26 11:12:32.234451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.835 [2024-07-26 11:12:32.234455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d028c0) on tqpair=0x1c7eec0 00:24:12.835 [2024-07-26 11:12:32.234485] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:12.835 [2024-07-26 11:12:32.234495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01e40) on tqpair=0x1c7eec0 00:24:12.835 [2024-07-26 11:12:32.234501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.835 [2024-07-26 11:12:32.234506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d01fc0) on tqpair=0x1c7eec0 00:24:12.835 [2024-07-26 11:12:32.234509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.835 [2024-07-26 11:12:32.234513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d02140) on tqpair=0x1c7eec0 00:24:12.835 [2024-07-26 11:12:32.234517] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.835 [2024-07-26 11:12:32.234522] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d022c0) on tqpair=0x1c7eec0 00:24:12.835 [2024-07-26 11:12:32.234525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.835 [2024-07-26 11:12:32.234533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.835 [2024-07-26 11:12:32.234536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.835 [2024-07-26 11:12:32.234539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7eec0) 00:24:12.835 [2024-07-26 11:12:32.234548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.835 [2024-07-26 11:12:32.234561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d022c0, cid 3, qid 0 00:24:12.835 [2024-07-26 11:12:32.234716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.835 [2024-07-26 11:12:32.234726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.835 [2024-07-26 11:12:32.234729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.835 [2024-07-26 11:12:32.234732] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d022c0) on tqpair=0x1c7eec0 00:24:12.835 [2024-07-26 11:12:32.234739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.835 [2024-07-26 11:12:32.234742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.835 [2024-07-26 11:12:32.234745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7eec0) 00:24:12.835 [2024-07-26 11:12:32.234751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.835 [2024-07-26 11:12:32.234767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d022c0, cid 3, qid 0 00:24:12.835 [2024-07-26 11:12:32.238051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.835 [2024-07-26 11:12:32.238063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.835 [2024-07-26 11:12:32.238066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.835 [2024-07-26 11:12:32.238070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d022c0) on tqpair=0x1c7eec0 00:24:12.835 [2024-07-26 11:12:32.238074] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:12.835 [2024-07-26 11:12:32.238078] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:12.835 [2024-07-26 11:12:32.238089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:12.835 [2024-07-26 11:12:32.238092] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:12.835 [2024-07-26 11:12:32.238095] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c7eec0) 00:24:12.835 [2024-07-26 11:12:32.238102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.835 [2024-07-26 11:12:32.238115] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1d022c0, cid 3, qid 0 00:24:12.835 [2024-07-26 11:12:32.238362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:12.835 [2024-07-26 11:12:32.238372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:12.835 [2024-07-26 11:12:32.238374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:12.835 [2024-07-26 11:12:32.238378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d022c0) on tqpair=0x1c7eec0 00:24:12.835 [2024-07-26 11:12:32.238386] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:24:12.835 0% 00:24:12.835 Data Units Read: 0 00:24:12.835 Data Units Written: 0 00:24:12.835 Host Read Commands: 0 00:24:12.835 Host Write Commands: 0 00:24:12.835 Controller Busy Time: 0 minutes 00:24:12.835 Power Cycles: 0 00:24:12.835 Power On Hours: 0 hours 00:24:12.835 Unsafe Shutdowns: 0 00:24:12.835 Unrecoverable Media Errors: 0 00:24:12.835 Lifetime Error Log Entries: 0 00:24:12.835 Warning Temperature Time: 0 minutes 00:24:12.835 Critical Temperature Time: 0 minutes 00:24:12.835 00:24:12.835 Number of Queues 00:24:12.835 ================ 00:24:12.835 Number of I/O Submission Queues: 127 00:24:12.835 Number of I/O Completion Queues: 127 00:24:12.835 00:24:12.835 Active Namespaces 00:24:12.835 ================= 00:24:12.835 Namespace ID:1 00:24:12.835 Error Recovery Timeout: Unlimited 00:24:12.835 Command Set Identifier: NVM (00h) 00:24:12.835 Deallocate: Supported 00:24:12.835 Deallocated/Unwritten Error: Not Supported 00:24:12.835 Deallocated Read Value: Unknown 00:24:12.835 Deallocate in Write Zeroes: Not Supported 00:24:12.835 Deallocated Guard Field: 0xFFFF 00:24:12.835 Flush: Supported 00:24:12.835 Reservation: Supported 00:24:12.835 Namespace Sharing Capabilities: Multiple Controllers 00:24:12.835 Size (in LBAs): 131072 (0GiB) 00:24:12.835 Capacity (in LBAs): 131072 (0GiB) 00:24:12.835 Utilization (in LBAs): 131072 (0GiB) 00:24:12.835 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:12.835 EUI64: ABCDEF0123456789 00:24:12.835 UUID: 951abb45-9baf-4969-81b6-f99f9cd48d48 00:24:12.835 Thin Provisioning: Not Supported 00:24:12.835 Per-NS Atomic Units: Yes 00:24:12.835 Atomic Boundary Size (Normal): 0 00:24:12.835 Atomic Boundary Size (PFail): 0 00:24:12.835 Atomic Boundary Offset: 0 00:24:12.835 Maximum Single Source Range Length: 65535 00:24:12.835 Maximum Copy Length: 65535 00:24:12.835 Maximum Source Range Count: 1 00:24:12.835 NGUID/EUI64 Never Reused: No 00:24:12.835 Namespace Write Protected: No 00:24:12.835 Number of LBA Formats: 1 00:24:12.835 Current LBA Format: LBA Format #00 00:24:12.835 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:12.835 00:24:12.835 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:12.835 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:12.835 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.835 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:12.835 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.835 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:12.835 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:12.835 11:12:32 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:12.836 rmmod nvme_tcp 00:24:12.836 rmmod nvme_fabrics 00:24:12.836 rmmod nvme_keyring 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1534382 ']' 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1534382 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1534382 ']' 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1534382 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.836 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1534382 00:24:13.096 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:13.096 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:13.096 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1534382' 00:24:13.096 killing process with pid 1534382 00:24:13.096 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1534382 00:24:13.096 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1534382 00:24:13.096 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:13.096 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:13.096 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:13.096 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:13.096 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:13.096 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.096 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.096 11:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.640 11:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:15.640 00:24:15.640 real 0m9.479s 00:24:15.640 user 0m7.736s 00:24:15.640 sys 0m4.600s 00:24:15.640 11:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:15.640 11:12:34 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.640 ************************************ 00:24:15.640 END TEST nvmf_identify 00:24:15.640 ************************************ 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.641 ************************************ 00:24:15.641 START TEST nvmf_perf 00:24:15.641 ************************************ 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:15.641 * Looking for test storage... 00:24:15.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:15.641 11:12:34 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:15.641 11:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:20.925 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:20.925 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:20.925 Found net devices under 0000:86:00.0: cvl_0_0 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.925 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:20.926 Found net devices under 0000:86:00.1: cvl_0_1 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:20.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:24:20.926 00:24:20.926 --- 10.0.0.2 ping statistics --- 00:24:20.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.926 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:20.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.397 ms 00:24:20.926 00:24:20.926 --- 10.0.0.1 ping statistics --- 00:24:20.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.926 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1537930 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1537930 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1537930 ']' 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:20.926 11:12:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:20.926 [2024-07-26 11:12:39.815335] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:20.926 [2024-07-26 11:12:39.815380] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.926 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.926 [2024-07-26 11:12:39.872755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.926 [2024-07-26 11:12:39.953845] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.926 [2024-07-26 11:12:39.953882] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.926 [2024-07-26 11:12:39.953890] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.926 [2024-07-26 11:12:39.953896] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.926 [2024-07-26 11:12:39.953901] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
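With nvmf_tgt now running inside the cvl_0_0_ns_spdk namespace, perf.sh configures the target through rpc.py as traced in the lines that follow. Condensed into a standalone sketch (paths shortened to the repository-relative scripts/rpc.py, assuming rpc.py's default /var/tmp/spdk.sock; the Nvme0n1 bdev comes from gen_nvme.sh / load_subsystem_config as shown below), the bring-up is roughly:

    # Condensed sketch of the target configuration driven by perf.sh below.
    scripts/rpc.py bdev_malloc_create 64 512                    # returns "Malloc0"
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

after which spdk_nvme_perf is pointed first at the local PCIe device and then at 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' for the fabric runs.
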
00:24:20.926 [2024-07-26 11:12:39.953937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.926 [2024-07-26 11:12:39.954034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.926 [2024-07-26 11:12:39.954107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.926 [2024-07-26 11:12:39.954109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.187 11:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:21.187 11:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:24:21.187 11:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:21.187 11:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:21.187 11:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:21.187 11:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.187 11:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:21.187 11:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:24.566 11:12:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:24.566 11:12:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:24.566 11:12:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:24:24.566 11:12:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:24.826 11:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:24.826 11:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:24:24.826 11:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:24.826 11:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:24.826 11:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:24.826 [2024-07-26 11:12:44.227652] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.826 11:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:25.086 11:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:25.086 11:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:25.345 11:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:25.345 11:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:25.345 11:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:25.604 [2024-07-26 11:12:44.966480] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.604 11:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:25.864 11:12:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:24:25.864 11:12:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:25.864 11:12:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:25.864 11:12:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:24:27.246 Initializing NVMe Controllers 00:24:27.246 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:24:27.246 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:24:27.246 Initialization complete. Launching workers. 00:24:27.246 ======================================================== 00:24:27.246 Latency(us) 00:24:27.246 Device Information : IOPS MiB/s Average min max 00:24:27.246 PCIE (0000:5e:00.0) NSID 1 from core 0: 97055.54 379.12 329.13 43.90 6238.10 00:24:27.246 ======================================================== 00:24:27.246 Total : 97055.54 379.12 329.13 43.90 6238.10 00:24:27.246 00:24:27.246 11:12:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:27.246 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.628 Initializing NVMe Controllers 00:24:28.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:28.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:28.628 Initialization complete. Launching workers. 
00:24:28.628 ======================================================== 00:24:28.628 Latency(us) 00:24:28.628 Device Information : IOPS MiB/s Average min max 00:24:28.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 108.00 0.42 9314.80 622.40 45482.92 00:24:28.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 48.00 0.19 21259.74 5994.87 47903.23 00:24:28.628 ======================================================== 00:24:28.628 Total : 156.00 0.61 12990.17 622.40 47903.23 00:24:28.628 00:24:28.628 11:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:28.628 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.008 Initializing NVMe Controllers 00:24:30.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:30.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:30.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:30.008 Initialization complete. Launching workers. 00:24:30.008 ======================================================== 00:24:30.008 Latency(us) 00:24:30.008 Device Information : IOPS MiB/s Average min max 00:24:30.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7771.43 30.36 4117.34 810.02 11104.94 00:24:30.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3800.03 14.84 8435.83 6391.89 16203.44 00:24:30.008 ======================================================== 00:24:30.008 Total : 11571.46 45.20 5535.52 810.02 16203.44 00:24:30.008 00:24:30.008 11:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:30.008 11:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:30.008 11:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:30.008 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.548 Initializing NVMe Controllers 00:24:32.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:32.548 Controller IO queue size 128, less than required. 00:24:32.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:32.548 Controller IO queue size 128, less than required. 00:24:32.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:32.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:32.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:32.548 Initialization complete. Launching workers. 
00:24:32.548 ======================================================== 00:24:32.548 Latency(us) 00:24:32.548 Device Information : IOPS MiB/s Average min max 00:24:32.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 739.50 184.87 181014.49 125962.83 319963.86 00:24:32.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 557.00 139.25 236482.61 93791.02 358385.20 00:24:32.548 ======================================================== 00:24:32.548 Total : 1296.50 324.12 204844.61 93791.02 358385.20 00:24:32.548 00:24:32.548 11:12:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:32.548 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.548 No valid NVMe controllers or AIO or URING devices found 00:24:32.548 Initializing NVMe Controllers 00:24:32.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:32.548 Controller IO queue size 128, less than required. 00:24:32.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:32.548 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:32.548 Controller IO queue size 128, less than required. 00:24:32.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:32.548 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:32.548 WARNING: Some requested NVMe devices were skipped 00:24:32.548 11:12:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:32.548 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.084 Initializing NVMe Controllers 00:24:35.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:35.084 Controller IO queue size 128, less than required. 00:24:35.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:35.084 Controller IO queue size 128, less than required. 00:24:35.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:35.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:35.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:35.084 Initialization complete. Launching workers. 
00:24:35.084 00:24:35.084 ==================== 00:24:35.084 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:35.084 TCP transport: 00:24:35.084 polls: 62913 00:24:35.084 idle_polls: 19349 00:24:35.084 sock_completions: 43564 00:24:35.084 nvme_completions: 2953 00:24:35.084 submitted_requests: 4444 00:24:35.084 queued_requests: 1 00:24:35.084 00:24:35.084 ==================== 00:24:35.084 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:35.084 TCP transport: 00:24:35.084 polls: 66506 00:24:35.084 idle_polls: 20684 00:24:35.084 sock_completions: 45822 00:24:35.084 nvme_completions: 3081 00:24:35.084 submitted_requests: 4604 00:24:35.084 queued_requests: 1 00:24:35.084 ======================================================== 00:24:35.084 Latency(us) 00:24:35.084 Device Information : IOPS MiB/s Average min max 00:24:35.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 737.99 184.50 179571.30 90806.07 278367.29 00:24:35.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 769.99 192.50 172713.82 95806.74 272084.19 00:24:35.084 ======================================================== 00:24:35.084 Total : 1507.98 377.00 176069.80 90806.07 278367.29 00:24:35.084 00:24:35.084 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:35.084 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:35.343 rmmod nvme_tcp 00:24:35.343 rmmod nvme_fabrics 00:24:35.343 rmmod nvme_keyring 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1537930 ']' 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1537930 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1537930 ']' 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1537930 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:35.343 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1537930 00:24:35.602 11:12:54 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:35.602 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:35.602 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1537930' 00:24:35.602 killing process with pid 1537930 00:24:35.602 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1537930 00:24:35.602 11:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1537930 00:24:36.983 11:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:36.983 11:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:36.983 11:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:36.983 11:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:36.983 11:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:36.983 11:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.983 11:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.983 11:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:39.523 00:24:39.523 real 0m23.741s 00:24:39.523 user 1m5.850s 00:24:39.523 sys 0m6.416s 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:39.523 ************************************ 00:24:39.523 END TEST nvmf_perf 00:24:39.523 ************************************ 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.523 ************************************ 00:24:39.523 START TEST nvmf_fio_host 00:24:39.523 ************************************ 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:39.523 * Looking for test storage... 
00:24:39.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:39.523 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:39.524 11:12:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:44.880 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:44.881 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:44.881 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.881 
11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:44.881 Found net devices under 0000:86:00.0: cvl_0_0 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:44.881 Found net devices under 0000:86:00.1: cvl_0_1 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
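At this point the target-side port (cvl_0_0) has been moved into its own network namespace; the commands that continue below assign addresses, bring the links up, open the NVMe/TCP port and confirm reachability in both directions with ping. Condensed into stand-alone form, the nvmf_tcp_init sequence recorded in this trace amounts to roughly the following (interface and namespace names are simply the ones this host uses):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # firewall exception as issued in the trace
ping -c 1 10.0.0.2                                             # default namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> initiator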
00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:44.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:24:44.881 00:24:44.881 --- 10.0.0.2 ping statistics --- 00:24:44.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.881 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:44.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.426 ms 00:24:44.881 00:24:44.881 --- 10.0.0.1 ping statistics --- 00:24:44.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.881 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1544032 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 1544032 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1544032 ']' 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:44.881 11:13:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.881 [2024-07-26 11:13:03.979093] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:44.881 [2024-07-26 11:13:03.979137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.881 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.881 [2024-07-26 11:13:04.035781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:44.881 [2024-07-26 11:13:04.116915] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.881 [2024-07-26 11:13:04.116954] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.881 [2024-07-26 11:13:04.116961] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.881 [2024-07-26 11:13:04.116967] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.881 [2024-07-26 11:13:04.116972] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
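Once this second target instance is up, the script below creates the TCP transport, a Malloc1-backed subsystem with a listener on 10.0.0.2:4420, and then drives it with fio through SPDK's fio plugin rather than the kernel NVMe/TCP initiator. Stripped of the sanitizer-library detection that surrounds it in the trace, each fio invocation reduces to roughly the following (paths are the ones used in this workspace; the second job swaps example_config.fio for mock_sgl_config.fio and drops --bs):

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096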
00:24:44.881 [2024-07-26 11:13:04.117011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.882 [2024-07-26 11:13:04.117116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.882 [2024-07-26 11:13:04.117142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.882 [2024-07-26 11:13:04.117143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.452 11:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:45.452 11:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:45.452 11:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:45.712 [2024-07-26 11:13:04.948848] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.712 11:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:45.712 11:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:45.712 11:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.712 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:45.712 Malloc1 00:24:45.712 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:45.971 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:46.230 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.490 [2024-07-26 11:13:05.743214] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:46.490 11:13:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:46.490 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:46.748 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:46.748 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:46.748 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:46.748 11:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:47.007 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:47.007 fio-3.35 00:24:47.007 Starting 1 thread 00:24:47.007 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.548 00:24:49.548 test: (groupid=0, jobs=1): err= 0: pid=1544623: Fri Jul 26 11:13:08 2024 00:24:49.548 read: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(89.8MiB/2004msec) 00:24:49.548 slat (nsec): min=1584, max=247822, avg=1735.26, stdev=2310.83 00:24:49.548 clat (usec): min=2981, max=19877, avg=6644.93, stdev=1358.47 00:24:49.548 lat (usec): min=2982, max=19888, avg=6646.66, stdev=1358.77 00:24:49.548 clat percentiles (usec): 00:24:49.548 | 1.00th=[ 4490], 5.00th=[ 5014], 10.00th=[ 5342], 20.00th=[ 5735], 00:24:49.548 | 30.00th=[ 5932], 40.00th=[ 6194], 50.00th=[ 6390], 60.00th=[ 6652], 00:24:49.548 | 70.00th=[ 6980], 80.00th=[ 7439], 90.00th=[ 8160], 95.00th=[ 8848], 00:24:49.548 | 99.00th=[11600], 99.50th=[13566], 99.90th=[17171], 99.95th=[18220], 00:24:49.548 | 99.99th=[19792] 00:24:49.548 bw ( KiB/s): min=43264, max=47800, per=99.83%, avg=45790.00, stdev=1883.90, samples=4 00:24:49.548 iops : min=10816, max=11950, avg=11447.50, stdev=470.97, samples=4 00:24:49.548 write: IOPS=11.4k, BW=44.5MiB/s 
(46.6MB/s)(89.1MiB/2004msec); 0 zone resets 00:24:49.548 slat (nsec): min=1628, max=227883, avg=1811.57, stdev=1672.92 00:24:49.548 clat (usec): min=1587, max=17810, avg=4504.52, stdev=961.26 00:24:49.548 lat (usec): min=1589, max=17832, avg=4506.33, stdev=961.70 00:24:49.548 clat percentiles (usec): 00:24:49.548 | 1.00th=[ 2737], 5.00th=[ 3228], 10.00th=[ 3490], 20.00th=[ 3851], 00:24:49.548 | 30.00th=[ 4080], 40.00th=[ 4293], 50.00th=[ 4490], 60.00th=[ 4621], 00:24:49.548 | 70.00th=[ 4817], 80.00th=[ 5080], 90.00th=[ 5342], 95.00th=[ 5669], 00:24:49.548 | 99.00th=[ 7308], 99.50th=[ 8848], 99.90th=[15139], 99.95th=[16057], 00:24:49.548 | 99.99th=[17695] 00:24:49.548 bw ( KiB/s): min=43712, max=46856, per=100.00%, avg=45526.00, stdev=1433.86, samples=4 00:24:49.548 iops : min=10928, max=11714, avg=11381.50, stdev=358.46, samples=4 00:24:49.548 lat (msec) : 2=0.01%, 4=13.19%, 10=85.45%, 20=1.35% 00:24:49.548 cpu : usr=73.94%, sys=21.37%, ctx=13, majf=0, minf=5 00:24:49.548 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:49.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:49.548 issued rwts: total=22980,22805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:49.548 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:49.548 00:24:49.548 Run status group 0 (all jobs): 00:24:49.548 READ: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), io=89.8MiB (94.1MB), run=2004-2004msec 00:24:49.548 WRITE: bw=44.5MiB/s (46.6MB/s), 44.5MiB/s-44.5MiB/s (46.6MB/s-46.6MB/s), io=89.1MiB (93.4MB), run=2004-2004msec 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:49.548 11:13:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:49.548 11:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:49.548 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:49.548 fio-3.35 00:24:49.548 Starting 1 thread 00:24:49.548 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.089 00:24:52.089 test: (groupid=0, jobs=1): err= 0: pid=1545197: Fri Jul 26 11:13:11 2024 00:24:52.089 read: IOPS=8585, BW=134MiB/s (141MB/s)(270MiB/2016msec) 00:24:52.089 slat (nsec): min=2562, max=90845, avg=2861.77, stdev=1412.64 00:24:52.089 clat (usec): min=3343, max=42274, avg=9248.68, stdev=3701.44 00:24:52.089 lat (usec): min=3346, max=42280, avg=9251.54, stdev=3701.89 00:24:52.089 clat percentiles (usec): 00:24:52.089 | 1.00th=[ 4490], 5.00th=[ 5342], 10.00th=[ 5997], 20.00th=[ 6849], 00:24:52.089 | 30.00th=[ 7504], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 9110], 00:24:52.089 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[12387], 95.00th=[16057], 00:24:52.089 | 99.00th=[26346], 99.50th=[27919], 99.90th=[29230], 99.95th=[29754], 00:24:52.089 | 99.99th=[41681] 00:24:52.089 bw ( KiB/s): min=66144, max=74208, per=50.95%, avg=69992.00, stdev=4265.68, samples=4 00:24:52.089 iops : min= 4134, max= 4638, avg=4374.50, stdev=266.61, samples=4 00:24:52.089 write: IOPS=5008, BW=78.3MiB/s (82.1MB/s)(142MiB/1812msec); 0 zone resets 00:24:52.089 slat (usec): min=29, max=378, avg=32.01, stdev= 7.50 00:24:52.089 clat (usec): min=5785, max=36176, avg=9839.87, stdev=3641.70 00:24:52.089 lat (usec): min=5815, max=36215, avg=9871.88, stdev=3645.32 00:24:52.089 clat percentiles (usec): 00:24:52.089 | 1.00th=[ 6456], 5.00th=[ 6980], 10.00th=[ 7439], 20.00th=[ 7898], 00:24:52.089 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9503], 00:24:52.089 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11731], 95.00th=[14746], 00:24:52.089 | 99.00th=[30278], 99.50th=[30802], 99.90th=[33162], 99.95th=[33424], 00:24:52.089 | 99.99th=[36439] 00:24:52.089 bw ( KiB/s): min=68800, max=76544, per=90.60%, avg=72600.00, stdev=4335.45, samples=4 00:24:52.089 iops : min= 4300, max= 4784, avg=4537.50, stdev=270.97, samples=4 00:24:52.089 lat (msec) : 4=0.19%, 10=70.70%, 20=25.69%, 50=3.41% 00:24:52.089 cpu : usr=84.82%, 
sys=12.15%, ctx=69, majf=0, minf=2 00:24:52.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:52.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:52.089 issued rwts: total=17308,9075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.089 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.089 00:24:52.089 Run status group 0 (all jobs): 00:24:52.089 READ: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=270MiB (284MB), run=2016-2016msec 00:24:52.089 WRITE: bw=78.3MiB/s (82.1MB/s), 78.3MiB/s-78.3MiB/s (82.1MB/s-82.1MB/s), io=142MiB (149MB), run=1812-1812msec 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:52.089 rmmod nvme_tcp 00:24:52.089 rmmod nvme_fabrics 00:24:52.089 rmmod nvme_keyring 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1544032 ']' 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1544032 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1544032 ']' 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1544032 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1544032 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1544032' 00:24:52.089 killing process with pid 1544032 00:24:52.089 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@969 -- # kill 1544032 00:24:52.090 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1544032 00:24:52.349 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:52.349 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:52.349 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:52.350 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:52.350 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:52.350 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.350 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.350 11:13:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:54.894 00:24:54.894 real 0m15.320s 00:24:54.894 user 0m47.246s 00:24:54.894 sys 0m5.895s 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.894 ************************************ 00:24:54.894 END TEST nvmf_fio_host 00:24:54.894 ************************************ 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.894 ************************************ 00:24:54.894 START TEST nvmf_failover 00:24:54.894 ************************************ 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:54.894 * Looking for test storage... 
00:24:54.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.894 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:54.895 11:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
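nvmftestinit, invoked at the end of the block above, is the helper that builds the private-netns topology that the next stretch of this log traces. Condensed into the bare ip/iptables commands visible below, this is a sketch of what the trace shows rather than the helper's source; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are taken from this log:

  # move the first NIC port into its own namespace and give it the target address
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # the second port stays in the root namespace and plays the initiator side
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # sanity-check the link in both directions before any NVMe/TCP traffic
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process is then started inside cvl_0_0_ns_spdk, so it listens on 10.0.0.2 while bdevperf connects from the root namespace side at 10.0.0.1.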
00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:54.895 11:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.184 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.184 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:00.184 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:00.184 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.185 11:13:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:00.185 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:00.185 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:00.185 Found net devices under 0000:86:00.0: cvl_0_0 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:00.185 Found net devices under 0000:86:00.1: cvl_0_1 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:25:00.185 00:25:00.185 --- 10.0.0.2 ping statistics --- 00:25:00.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.185 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:25:00.185 00:25:00.185 --- 10.0.0.1 ping statistics --- 00:25:00.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.185 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:00.185 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.186 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:00.186 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:00.186 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.186 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:00.186 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:00.446 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:00.446 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.446 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:00.446 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.446 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1549043 00:25:00.446 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1549043 00:25:00.446 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:00.446 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1549043 ']' 00:25:00.446 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.446 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:00.446 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.446 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:00.446 11:13:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.446 [2024-07-26 11:13:19.735280] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:00.446 [2024-07-26 11:13:19.735327] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.446 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.446 [2024-07-26 11:13:19.793047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:00.446 [2024-07-26 11:13:19.871808] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.447 [2024-07-26 11:13:19.871847] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.447 [2024-07-26 11:13:19.871854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.447 [2024-07-26 11:13:19.871860] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.447 [2024-07-26 11:13:19.871866] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
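A quick note on the core mask before the reactor messages that follow: nvmfappstart launches nvmf_tgt with -m 0xE, and 0xE is binary 1110, i.e. a mask selecting cores 1, 2 and 3 while skipping core 0. That is why spdk_app_start reports "Total cores available: 3" above and why exactly three "Reactor started on core" lines, and none for core 0, appear below. A throwaway way to expand such a mask (illustrative only, not part of the test scripts):

  mask=0xE
  for core in $(seq 0 7); do
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
  done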
00:25:00.447 [2024-07-26 11:13:19.871902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.447 [2024-07-26 11:13:19.871927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:00.447 [2024-07-26 11:13:19.871929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.387 11:13:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:01.387 11:13:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:01.387 11:13:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:01.387 11:13:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:01.387 11:13:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:01.387 11:13:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.387 11:13:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:01.387 [2024-07-26 11:13:20.744067] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.387 11:13:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:01.648 Malloc0 00:25:01.648 11:13:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:01.909 11:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:01.909 11:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.169 [2024-07-26 11:13:21.524807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.169 11:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:02.428 [2024-07-26 11:13:21.709335] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:02.428 11:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:02.428 [2024-07-26 11:13:21.889923] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:02.428 11:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1549431 00:25:02.428 11:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:02.428 11:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:02.428 11:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1549431 /var/tmp/bdevperf.sock 00:25:02.428 11:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1549431 ']' 00:25:02.428 11:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:02.428 11:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:02.428 11:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:02.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:02.428 11:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:02.428 11:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:03.368 11:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:03.368 11:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:03.368 11:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.938 NVMe0n1 00:25:03.938 11:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.938 00:25:03.938 11:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:03.938 11:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1549659 00:25:03.938 11:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:05.391 11:13:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.391 [2024-07-26 11:13:24.596447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596553] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596609] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.391 [2024-07-26 11:13:24.596653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.392 [2024-07-26 11:13:24.596659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.392 [2024-07-26 11:13:24.596665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.392 [2024-07-26 11:13:24.596671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.392 [2024-07-26 11:13:24.596676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.392 [2024-07-26 11:13:24.596682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.392 [2024-07-26 11:13:24.596688] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.392 [2024-07-26 11:13:24.596695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.392 [2024-07-26 11:13:24.596701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1502f50 is same with the state(5) to be set 00:25:05.392 11:13:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:08.685 11:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.685 00:25:08.685 11:13:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:08.946 [2024-07-26 11:13:28.197606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197651] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197702] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197721] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 
11:13:28.197747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197814] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197858] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197871] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197882] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same 
with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197894] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197978] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.197996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.198003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.198009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.198016] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.198021] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.198027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.198033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.946 [2024-07-26 11:13:28.198039] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503d70 is same with the state(5) to be set 00:25:08.947 11:13:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:12.239 11:13:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.239 [2024-07-26 11:13:31.394497] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.239 11:13:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:13.179 11:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:13.179 [2024-07-26 11:13:32.610124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.179 [2024-07-26 11:13:32.610174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.179 [2024-07-26 11:13:32.610182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.179 [2024-07-26 11:13:32.610188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 
11:13:32.610248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610284] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610290] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610323] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610329] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610336] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610357] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same 
with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610427] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 [2024-07-26 11:13:32.610486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bdb40 is same with the state(5) to be set 00:25:13.180 11:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1549659 00:25:19.818 0 00:25:19.818 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1549431 00:25:19.818 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1549431 ']' 00:25:19.818 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1549431 00:25:19.818 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:19.818 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:19.818 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1549431 00:25:19.818 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 
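Before the teardown completes below, the failover exercise that the trace above just finished can be collapsed into its essential rpc.py calls. This is only a recap of commands already visible in this log, with the sleeps and the bdevperf.py perform_tests run between them omitted; the inline comments describe the apparent intent of each step:

  BPERF='rpc.py -s /var/tmp/bdevperf.sock'
  $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # first path
  $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # second path
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420                    # drop the active listener; I/O should fail over to 4421
  $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # third path
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421                    # fail over again
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420                       # bring 4420 back
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422                    # and fail back to it

The walls of nvmf_tcp_qpair_set_recv_state messages interleaved above appear to come from the target's TCP transport as the qpairs on each removed listener are torn down, which is why they cluster around the remove_listener calls.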
00:25:19.818 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:19.818 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1549431' 00:25:19.818 killing process with pid 1549431 00:25:19.818 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1549431 00:25:19.818 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1549431 00:25:19.818 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:19.818 [2024-07-26 11:13:21.963618] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:19.818 [2024-07-26 11:13:21.963668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1549431 ] 00:25:19.818 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.818 [2024-07-26 11:13:22.017457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.818 [2024-07-26 11:13:22.092331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.818 Running I/O for 15 seconds... 00:25:19.818 [2024-07-26 11:13:24.597946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.597984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598086] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96032 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.818 [2024-07-26 11:13:24.598602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:19.818 [2024-07-26 11:13:24.598713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.818 [2024-07-26 11:13:24.598878] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.818 [2024-07-26 11:13:24.598887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.598893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.598901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.598908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.598917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.598924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.598932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.598939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.598947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.598956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.598964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.598970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.598978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.598985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.598994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599030] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 
[2024-07-26 11:13:24.599348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:91 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.819 [2024-07-26 11:13:24.599695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.819 [2024-07-26 11:13:24.599759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.819 [2024-07-26 11:13:24.599767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.820 [2024-07-26 11:13:24.599773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.599781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.820 [2024-07-26 11:13:24.599788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.599796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96688 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:19.820 [2024-07-26 11:13:24.599802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.599810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.820 [2024-07-26 11:13:24.599816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.599824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.820 [2024-07-26 11:13:24.599830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.599838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.820 [2024-07-26 11:13:24.599844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.599853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.820 [2024-07-26 11:13:24.599859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.599867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.820 [2024-07-26 11:13:24.599873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.599881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.820 [2024-07-26 11:13:24.599887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.599895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.820 [2024-07-26 11:13:24.599903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.599912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.820 [2024-07-26 11:13:24.599918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.599926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.820 [2024-07-26 11:13:24.599932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.599950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.820 [2024-07-26 11:13:24.599957] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.820 [2024-07-26 11:13:24.599963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96768 len:8 PRP1 0x0 PRP2 0x0 00:25:19.820 [2024-07-26 11:13:24.599970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.600011] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8c54b0 was disconnected and freed. reset controller. 00:25:19.820 [2024-07-26 11:13:24.600022] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:19.820 [2024-07-26 11:13:24.600041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.820 [2024-07-26 11:13:24.600053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.600061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.820 [2024-07-26 11:13:24.600068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.600075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.820 [2024-07-26 11:13:24.600081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.600088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.820 [2024-07-26 11:13:24.600094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:24.600106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.820 [2024-07-26 11:13:24.602944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.820 [2024-07-26 11:13:24.602972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d2540 (9): Bad file descriptor 00:25:19.820 [2024-07-26 11:13:24.644050] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:19.820 [2024-07-26 11:13:28.198983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199192] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:28056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:28064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:28080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:28088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:28104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:28136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:28176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199512] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.820 [2024-07-26 11:13:28.199519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.820 [2024-07-26 11:13:28.199527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:28272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:28328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:28408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.199796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:28416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.199810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28424 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:19.821 [2024-07-26 11:13:28.199826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:28432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.199842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.199858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:28448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.199873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.199888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 
11:13:28.199980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.199988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:28384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.199995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.200010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.821 [2024-07-26 11:13:28.200024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:28480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:28488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:28496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:28504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:28528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:28536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:28560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:28568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:28576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.821 [2024-07-26 11:13:28.200260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.821 [2024-07-26 11:13:28.200269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:28584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:28608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:28624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:28632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:28648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:28672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:28680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:28696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:28704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:28712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 
11:13:28.200603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:28760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:28776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:28792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:28824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:28832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:28840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.822 [2024-07-26 11:13:28.200761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.822 [2024-07-26 11:13:28.200791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28848 len:8 PRP1 0x0 PRP2 0x0 00:25:19.822 [2024-07-26 11:13:28.200798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.822 [2024-07-26 11:13:28.200813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.822 [2024-07-26 11:13:28.200819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28856 len:8 PRP1 0x0 PRP2 0x0 00:25:19.822 [2024-07-26 11:13:28.200826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.822 [2024-07-26 11:13:28.200838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.822 [2024-07-26 11:13:28.200845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28864 len:8 PRP1 0x0 PRP2 0x0 00:25:19.822 [2024-07-26 11:13:28.200852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.822 [2024-07-26 11:13:28.200864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.822 [2024-07-26 11:13:28.200870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28872 len:8 PRP1 0x0 PRP2 0x0 00:25:19.822 [2024-07-26 11:13:28.200876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.822 [2024-07-26 11:13:28.200889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.822 [2024-07-26 11:13:28.200895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28880 len:8 PRP1 0x0 PRP2 0x0 00:25:19.822 [2024-07-26 11:13:28.200902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.822 [2024-07-26 11:13:28.200915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.822 [2024-07-26 11:13:28.200921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28888 len:8 PRP1 0x0 PRP2 0x0 00:25:19.822 [2024-07-26 11:13:28.200928] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.822 [2024-07-26 11:13:28.200940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.822 [2024-07-26 11:13:28.200946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28896 len:8 PRP1 0x0 PRP2 0x0 00:25:19.822 [2024-07-26 11:13:28.200952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.822 [2024-07-26 11:13:28.200964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.822 [2024-07-26 11:13:28.200970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28904 len:8 PRP1 0x0 PRP2 0x0 00:25:19.822 [2024-07-26 11:13:28.200976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.200984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.822 [2024-07-26 11:13:28.200989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.822 [2024-07-26 11:13:28.200995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28912 len:8 PRP1 0x0 PRP2 0x0 00:25:19.822 [2024-07-26 11:13:28.201001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.201008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.822 [2024-07-26 11:13:28.201013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.822 [2024-07-26 11:13:28.201018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28920 len:8 PRP1 0x0 PRP2 0x0 00:25:19.822 [2024-07-26 11:13:28.201025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.201037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.822 [2024-07-26 11:13:28.201049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.822 [2024-07-26 11:13:28.201056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28928 len:8 PRP1 0x0 PRP2 0x0 00:25:19.822 [2024-07-26 11:13:28.201062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.201069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.822 [2024-07-26 11:13:28.201074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.822 [2024-07-26 11:13:28.201081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28936 len:8 PRP1 0x0 PRP2 0x0 00:25:19.822 [2024-07-26 11:13:28.201088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.201096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.822 [2024-07-26 11:13:28.201102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.822 [2024-07-26 11:13:28.201109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28944 len:8 PRP1 0x0 PRP2 0x0 00:25:19.822 [2024-07-26 11:13:28.201117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.201124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.822 [2024-07-26 11:13:28.201129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.822 [2024-07-26 11:13:28.201135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28952 len:8 PRP1 0x0 PRP2 0x0 00:25:19.822 [2024-07-26 11:13:28.201142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.822 [2024-07-26 11:13:28.201149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.823 [2024-07-26 11:13:28.201154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.823 [2024-07-26 11:13:28.201160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28960 len:8 PRP1 0x0 PRP2 0x0 00:25:19.823 [2024-07-26 11:13:28.201167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:28.201174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.823 [2024-07-26 11:13:28.201180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.823 [2024-07-26 11:13:28.201185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28968 len:8 PRP1 0x0 PRP2 0x0 00:25:19.823 [2024-07-26 11:13:28.201192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:28.201230] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8f63f0 was disconnected and freed. reset controller. 
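The "(00/08)" pair printed in each completion above is the NVMe status code type and status code: SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion", which the driver reports for every command still outstanding on the I/O queue pair when that queue is torn down during the disconnect. A minimal shell sketch for summarizing such entries from a saved copy of this console output (the file name console.log is an assumption, not a file produced by this job):

    # Count completions aborted by SQ deletion, grouped by queue id.
    # "console.log" is a placeholder for wherever this job's output was saved.
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' console.log \
        | awk '{print $NF}' | sort | uniq -c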
00:25:19.823 [2024-07-26 11:13:28.201240] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:19.823 [2024-07-26 11:13:28.201260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.823 [2024-07-26 11:13:28.201268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:28.201276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.823 [2024-07-26 11:13:28.201283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:28.201292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.823 [2024-07-26 11:13:28.201299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:28.201308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.823 [2024-07-26 11:13:28.201315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:28.201322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.823 [2024-07-26 11:13:28.201353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d2540 (9): Bad file descriptor 00:25:19.823 [2024-07-26 11:13:28.204193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.823 [2024-07-26 11:13:28.325598] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
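The notices above show bdev_nvme failing over nqn.2016-06.io.spdk:cnode1 from the path at 10.0.0.2:4421 to an alternate path at 10.0.0.2:4422 and then resetting the controller. A minimal sketch of how such an alternate path can be registered through SPDK's rpc.py, assuming a running target that listens on both ports; the bdev name Nvme0 is an assumption, this is not the script used by this job, and exact failover/multipath options vary between SPDK versions:

    rpc_py=scripts/rpc.py
    # Primary TCP path to the subsystem named in the log.
    $rpc_py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
    # Second path to the same subsystem and bdev name; bdev_nvme can fail over
    # to it when the first path is disconnected, as in the log above.
    $rpc_py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1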
00:25:19.823 [2024-07-26 11:13:32.610711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610907] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.610987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.610994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:55032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:55168 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.823 [2024-07-26 11:13:32.611471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.823 [2024-07-26 11:13:32.611480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:19.824 [2024-07-26 11:13:32.611533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611683] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:55472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.611986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.611992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.824 [2024-07-26 11:13:32.612007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.824 [2024-07-26 11:13:32.612260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.824 [2024-07-26 11:13:32.612267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 
11:13:32.612289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:100 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.825 [2024-07-26 11:13:32.612665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.825 [2024-07-26 11:13:32.612691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.825 [2024-07-26 11:13:32.612697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55864 len:8 PRP1 0x0 PRP2 0x0 00:25:19.825 [2024-07-26 11:13:32.612706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612749] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8f60b0 was disconnected and freed. reset controller. 
00:25:19.825 [2024-07-26 11:13:32.612759] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:19.825 [2024-07-26 11:13:32.612778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.825 [2024-07-26 11:13:32.612786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.825 [2024-07-26 11:13:32.612800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.825 [2024-07-26 11:13:32.612814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.825 [2024-07-26 11:13:32.612829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.825 [2024-07-26 11:13:32.612836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:19.825 [2024-07-26 11:13:32.615702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:19.825 [2024-07-26 11:13:32.615733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d2540 (9): Bad file descriptor 00:25:19.825 [2024-07-26 11:13:32.693610] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
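The driver messages above show bdev_nvme failing the active path, aborting its queued I/O with SQ DELETION status, and reconnecting to the next registered path (from 10.0.0.2:4422 back to 10.0.0.2:4420). That is only possible because the test registers the same subsystem under one controller name across several listeners; a condensed sketch of that attach sequence is below (the same three calls appear verbatim at host/failover.sh@78-@80 further down in this trace). The RPC shorthand variable, the loop, and the relative rpc.py path are simplifications, not how the script literally writes it:

    # Shorthand sketch; the test issues these three calls explicitly rather than in a loop.
    RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"   # bdevperf's RPC socket, as seen in this trace
    for port in 4420 4421 4422; do
        # same -b NVMe0 each time, so the extra trids become failover paths for one controller
        $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done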
00:25:19.825 00:25:19.825 Latency(us) 00:25:19.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.825 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:19.825 Verification LBA range: start 0x0 length 0x4000 00:25:19.825 NVMe0n1 : 15.01 10790.99 42.15 747.75 0.00 11070.20 1410.45 52656.75 00:25:19.825 =================================================================================================================== 00:25:19.825 Total : 10790.99 42.15 747.75 0.00 11070.20 1410.45 52656.75 00:25:19.825 Received shutdown signal, test time was about 15.000000 seconds 00:25:19.825 00:25:19.825 Latency(us) 00:25:19.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.825 =================================================================================================================== 00:25:19.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.825 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:19.825 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:19.825 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:19.825 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1552183 00:25:19.825 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1552183 /var/tmp/bdevperf.sock 00:25:19.825 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:19.825 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1552183 ']' 00:25:19.825 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:19.825 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:19.825 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:19.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
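The shell trace above records the pass criterion and the setup for the next run: host/failover.sh@65-@67 require exactly three 'Resetting controller successful' events in the captured log, and @72-@75 start a second bdevperf in RPC-server mode (-z -r /var/tmp/bdevperf.sock), then wait for its socket before driving it. A rough sketch of that pattern follows; the grep target (the try.txt capture referenced later in this trace), the relative paths, and the socket-polling loop are assumptions on my part, since the real waitforlisten() helper in autotest_common.sh does considerably more:

    # Simplified sketch of the pattern traced above (paths relative to the spdk checkout).
    count=$(grep -c 'Resetting controller successful' test/nvmf/host/try.txt)
    (( count == 3 )) || exit 1                      # the run above produced three failovers

    # -z: do not start I/O until told to over RPC; -r: serve JSON-RPC on this UNIX socket
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!

    # assumed stand-in for waitforlisten(): poll until the RPC socket appears
    until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done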
00:25:19.825 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:19.825 11:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:20.394 11:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:20.394 11:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:20.394 11:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:20.394 [2024-07-26 11:13:39.827964] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:20.394 11:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:20.653 [2024-07-26 11:13:40.004490] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:20.653 11:13:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:20.912 NVMe0n1 00:25:20.912 11:13:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.172 00:25:21.172 11:13:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.432 00:25:21.692 11:13:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:21.692 11:13:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:21.692 11:13:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.951 11:13:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:25.247 11:13:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:25.247 11:13:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:25.247 11:13:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1553112 00:25:25.247 11:13:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:25.247 11:13:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1553112 00:25:26.184 0 00:25:26.184 11:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:26.184 [2024-07-26 11:13:38.849334] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:26.184 [2024-07-26 11:13:38.849386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1552183 ] 00:25:26.184 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.184 [2024-07-26 11:13:38.903812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.184 [2024-07-26 11:13:38.973271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.184 [2024-07-26 11:13:41.298778] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:26.184 [2024-07-26 11:13:41.298830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.184 [2024-07-26 11:13:41.298841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.184 [2024-07-26 11:13:41.298850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.184 [2024-07-26 11:13:41.298857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.184 [2024-07-26 11:13:41.298865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.184 [2024-07-26 11:13:41.298872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.184 [2024-07-26 11:13:41.298879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:26.184 [2024-07-26 11:13:41.298886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:26.184 [2024-07-26 11:13:41.298892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:26.184 [2024-07-26 11:13:41.298917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:26.184 [2024-07-26 11:13:41.298930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b00540 (9): Bad file descriptor 00:25:26.184 [2024-07-26 11:13:41.348862] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:26.184 Running I/O for 1 seconds... 
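After the one-second verify run shown above, the script walks the controller through its remaining paths: it confirms that the NVMe0 controller still exists via bdev_nvme_get_controllers, detaches the path currently in use to force another failover, and checks again; the corresponding calls appear at host/failover.sh@95-@103 in the trace that follows. A condensed sketch, with the RPC shorthand and the relative rpc.py path again being simplifications:

    # Condensed from host/failover.sh@95-@103 as traced below.
    RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"

    $RPC bdev_nvme_get_controllers | grep -q NVMe0      # controller must still be present
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1                   # drop the 10.0.0.2:4422 path
    $RPC bdev_nvme_get_controllers | grep -q NVMe0
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1                   # drop the 10.0.0.2:4421 path
    sleep 3                                             # give bdev_nvme time to settle on the last path
    $RPC bdev_nvme_get_controllers | grep -q NVMe0      # still reachable via 10.0.0.2:4420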
00:25:26.184 00:25:26.184 Latency(us) 00:25:26.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.184 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:26.184 Verification LBA range: start 0x0 length 0x4000 00:25:26.184 NVMe0n1 : 1.01 10356.16 40.45 0.00 0.00 12312.08 2450.48 29861.62 00:25:26.184 =================================================================================================================== 00:25:26.184 Total : 10356.16 40.45 0.00 0.00 12312.08 2450.48 29861.62 00:25:26.184 11:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.184 11:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:26.445 11:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:26.705 11:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.705 11:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:26.705 11:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:26.965 11:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:30.295 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:30.295 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:30.295 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1552183 00:25:30.295 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1552183 ']' 00:25:30.295 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1552183 00:25:30.295 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:30.295 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:30.295 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1552183 00:25:30.295 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:30.295 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:30.295 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1552183' 00:25:30.295 killing process with pid 1552183 00:25:30.295 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1552183 00:25:30.295 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1552183 00:25:30.554 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:30.554 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:30.554 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:30.554 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:30.554 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:30.554 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:30.554 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:30.554 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:30.554 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:30.554 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:30.554 11:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:30.554 rmmod nvme_tcp 00:25:30.554 rmmod nvme_fabrics 00:25:30.554 rmmod nvme_keyring 00:25:30.554 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:30.554 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:30.554 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:30.554 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1549043 ']' 00:25:30.554 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1549043 00:25:30.554 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1549043 ']' 00:25:30.554 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1549043 00:25:30.554 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1549043 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1549043' 00:25:30.814 killing process with pid 1549043 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1549043 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1549043 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.814 11:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:33.354 00:25:33.354 real 0m38.469s 00:25:33.354 user 2m3.405s 00:25:33.354 sys 0m7.630s 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:33.354 ************************************ 00:25:33.354 END TEST nvmf_failover 00:25:33.354 ************************************ 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.354 ************************************ 00:25:33.354 START TEST nvmf_host_discovery 00:25:33.354 ************************************ 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:33.354 * Looking for test storage... 00:25:33.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:33.354 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:33.355 11:13:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:33.355 11:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:38.641 11:13:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:38.641 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:38.641 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:38.641 Found net devices under 0000:86:00.0: cvl_0_0 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.641 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:38.642 Found net devices under 0000:86:00.1: cvl_0_1 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:38.642 11:13:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:38.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:25:38.642 00:25:38.642 --- 10.0.0.2 ping statistics --- 00:25:38.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.642 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:38.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:25:38.642 00:25:38.642 --- 10.0.0.1 ping statistics --- 00:25:38.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.642 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1557353 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1557353 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1557353 ']' 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
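Note: the nvmf_tcp_init steps above move one E810 port (cvl_0_0) into a private network namespace to act as the NVMe-oF target, leave the second port (cvl_0_1) in the default namespace as the initiator, and verify connectivity before the target application is launched inside the namespace. A minimal sketch of that setup, using the interface names and addresses from this run (they will differ on other machines):

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP data port
  ping -c 1 10.0.0.2                                                  # reachability checks, as in the ping output above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # target application started inside the namespace (cf. the nvmf_tgt invocation logged above)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &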
00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:38.642 11:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.642 [2024-07-26 11:13:57.730118] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:38.642 [2024-07-26 11:13:57.730166] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.642 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.642 [2024-07-26 11:13:57.787986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.642 [2024-07-26 11:13:57.865846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.642 [2024-07-26 11:13:57.865885] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.642 [2024-07-26 11:13:57.865891] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.642 [2024-07-26 11:13:57.865897] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.642 [2024-07-26 11:13:57.865901] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:38.642 [2024-07-26 11:13:57.865940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.212 [2024-07-26 11:13:58.568511] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:25:39.212 [2024-07-26 11:13:58.580683] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.212 null0 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.212 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.213 null1 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1557578 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1557578 /tmp/host.sock 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1557578 ']' 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:39.213 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:39.213 11:13:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.213 [2024-07-26 11:13:58.655026] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
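Note: at this point two SPDK applications are running: the target (nvmf_tgt -m 0x2, default RPC socket /var/tmp/spdk.sock, inside the namespace) and a second nvmf_tgt acting as the host side (-m 0x1 -r /tmp/host.sock). The target-side provisioning performed so far can be reproduced with rpc.py roughly as below, run from an SPDK checkout; the commands and arguments are the ones visible in the trace, the relative paths are illustrative:

  # target side, default RPC socket
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                  # transport flags as used by the test
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009   # discovery service on 8009
  scripts/rpc.py bdev_null_create null0 1000 512                          # 1000 MB null bdevs, 512-byte blocks
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine
  # host side, separate RPC socket
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &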
00:25:39.213 [2024-07-26 11:13:58.655074] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557578 ] 00:25:39.213 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.213 [2024-07-26 11:13:58.707774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.473 [2024-07-26 11:13:58.781509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.043 
11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.043 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:40.303 11:13:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.303 [2024-07-26 11:13:59.787851] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:40.303 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:40.562 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:40.563 11:13:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:41.129 [2024-07-26 11:14:00.479795] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:41.129 [2024-07-26 11:14:00.479819] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:41.129 [2024-07-26 11:14:00.479834] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:41.129 
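Note: the host application was pointed at the discovery service earlier in the trace (bdev_nvme_start_discovery against 10.0.0.2:8009 with host NQN nqn.2021-12.io.spdk:test), and in this run the nvme0 controller only attaches after nvmf_subsystem_add_host allows that NQN on cnode0, which is the step logged just above. A rough sketch of the two sides, with the NQNs and addresses taken from this run:

  # host side: follow the discovery service and attach reported subsystems under the controller name "nvme"
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # target side: subsystem backed by null0, data listener on 4420, then allow the host NQN
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test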
[2024-07-26 11:14:00.568119] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:41.387 [2024-07-26 11:14:00.672518] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:41.387 [2024-07-26 11:14:00.672538] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:41.645 11:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:41.645 11:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.645 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
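Note: the repeated rpc_cmd/jq/sort/xargs fragments in the trace are the test's polling helpers: get_subsystem_names and get_bdev_list flatten RPC output into a sorted, space-separated string, and waitforcondition re-evaluates a condition up to ten times with one-second sleeps. A simplified standalone version (same /tmp/host.sock socket; the helper names mirror host/discovery.sh and common/autotest_common.sh):

  get_subsystem_names() {
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  waitforcondition() {
    local cond=$1 max=10
    while ((max--)); do
      eval "$cond" && return 0      # condition met
      sleep 1
    done
    return 1                        # gave up after ~10 s
  }
  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'     # controller attached
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'         # first namespace exposed as a bdev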
00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:41.646 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.904 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.162 [2024-07-26 11:14:01.496559] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:42.162 [2024-07-26 11:14:01.497867] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:42.162 [2024-07-26 11:14:01.497890] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:42.162 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 
-- # get_subsystem_names 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.163 [2024-07-26 11:14:01.586490] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:42.163 11:14:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:42.163 11:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:42.421 [2024-07-26 11:14:01.892096] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:42.421 [2024-07-26 11:14:01.892115] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:42.421 [2024-07-26 11:14:01.892121] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:43.357 11:14:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.357 [2024-07-26 11:14:02.756945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.357 [2024-07-26 11:14:02.756970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.357 [2024-07-26 11:14:02.756979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.357 [2024-07-26 11:14:02.756986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.357 [2024-07-26 11:14:02.756993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.357 [2024-07-26 11:14:02.757000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.357 [2024-07-26 11:14:02.757007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.357 [2024-07-26 11:14:02.757013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:43.357 [2024-07-26 11:14:02.757021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fcf30 is same with the state(5) to be set 00:25:43.357 [2024-07-26 11:14:02.757266] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:43.357 [2024-07-26 11:14:02.757279] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.357 [2024-07-26 11:14:02.766955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fcf30 (9): Bad file descriptor 00:25:43.357 [2024-07-26 11:14:02.776993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:43.357 [2024-07-26 11:14:02.777556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.357 [2024-07-26 11:14:02.777571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fcf30 with addr=10.0.0.2, port=4420 00:25:43.357 [2024-07-26 11:14:02.777578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fcf30 is same with the state(5) to be set 00:25:43.357 [2024-07-26 11:14:02.777591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fcf30 (9): Bad file descriptor 00:25:43.357 [2024-07-26 11:14:02.777614] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:43.357 [2024-07-26 11:14:02.777622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:43.357 [2024-07-26 11:14:02.777630] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:43.357 [2024-07-26 11:14:02.777641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.357 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.357 [2024-07-26 11:14:02.787050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:43.357 [2024-07-26 11:14:02.787535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.357 [2024-07-26 11:14:02.787547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fcf30 with addr=10.0.0.2, port=4420 00:25:43.357 [2024-07-26 11:14:02.787554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fcf30 is same with the state(5) to be set 00:25:43.357 [2024-07-26 11:14:02.787564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fcf30 (9): Bad file descriptor 00:25:43.357 [2024-07-26 11:14:02.787573] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:43.357 [2024-07-26 11:14:02.787579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:43.357 [2024-07-26 11:14:02.787586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:43.357 [2024-07-26 11:14:02.787595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.358 [2024-07-26 11:14:02.797100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:43.358 [2024-07-26 11:14:02.797697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.358 [2024-07-26 11:14:02.797709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fcf30 with addr=10.0.0.2, port=4420 00:25:43.358 [2024-07-26 11:14:02.797716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fcf30 is same with the state(5) to be set 00:25:43.358 [2024-07-26 11:14:02.797730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fcf30 (9): Bad file descriptor 00:25:43.358 [2024-07-26 11:14:02.797750] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:43.358 [2024-07-26 11:14:02.797757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:43.358 [2024-07-26 11:14:02.797764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:43.358 [2024-07-26 11:14:02.797774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:43.358 [2024-07-26 11:14:02.807150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:43.358 [2024-07-26 11:14:02.807638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.358 [2024-07-26 11:14:02.807651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fcf30 with addr=10.0.0.2, port=4420 00:25:43.358 [2024-07-26 11:14:02.807658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fcf30 is same with the state(5) to be set 00:25:43.358 [2024-07-26 11:14:02.807668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fcf30 (9): Bad file descriptor 00:25:43.358 [2024-07-26 11:14:02.807677] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:43.358 [2024-07-26 11:14:02.807683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:43.358 [2024-07-26 11:14:02.807692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:43.358 [2024-07-26 11:14:02.807701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.358 [2024-07-26 11:14:02.817204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:43.358 [2024-07-26 11:14:02.817694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.358 [2024-07-26 11:14:02.817706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fcf30 with addr=10.0.0.2, port=4420 00:25:43.358 [2024-07-26 11:14:02.817713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x23fcf30 is same with the state(5) to be set 00:25:43.358 [2024-07-26 11:14:02.817723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fcf30 (9): Bad file descriptor 00:25:43.358 [2024-07-26 11:14:02.817735] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:43.358 [2024-07-26 11:14:02.817742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:43.358 [2024-07-26 11:14:02.817749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:43.358 [2024-07-26 11:14:02.817758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.358 [2024-07-26 11:14:02.827255] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:43.358 [2024-07-26 11:14:02.827757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.358 [2024-07-26 11:14:02.827770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fcf30 with addr=10.0.0.2, port=4420 00:25:43.358 [2024-07-26 11:14:02.827777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fcf30 is same with the state(5) to be set 00:25:43.358 [2024-07-26 11:14:02.827787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fcf30 (9): Bad file descriptor 00:25:43.358 [2024-07-26 11:14:02.827803] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:43.358 [2024-07-26 11:14:02.827810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:43.358 [2024-07-26 11:14:02.827816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:43.358 [2024-07-26 11:14:02.827826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:43.358 [2024-07-26 11:14:02.837314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:43.358 [2024-07-26 11:14:02.837842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:43.358 [2024-07-26 11:14:02.837853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23fcf30 with addr=10.0.0.2, port=4420 00:25:43.358 [2024-07-26 11:14:02.837861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23fcf30 is same with the state(5) to be set 00:25:43.358 [2024-07-26 11:14:02.837870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23fcf30 (9): Bad file descriptor 00:25:43.358 [2024-07-26 11:14:02.837892] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:43.358 [2024-07-26 11:14:02.837899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:43.358 [2024-07-26 11:14:02.837905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:43.358 [2024-07-26 11:14:02.837915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
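The shell trace in this stretch keeps expanding the same waitforcondition polling loop from autotest_common.sh (@914 saves the condition string, @915 caps the attempts at max=10, @916 decrements it, @917 evals the condition, @918 returns 0 on success). A minimal sketch of such a helper, with the sleep between attempts being an assumption the trace does not show:

  waitforcondition() {
      local cond=$1      # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
      local max=10       # retry budget seen at autotest_common.sh@915
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1        # assumed pacing; not visible in the trace
      done
      return 1
  }

The interleaved nvme_ctrlr/nvme_tcp errors around it are expected here: discovery.sh@127 just removed the 10.0.0.2:4420 listener, so every reconnect attempt to that port fails with errno 111 (connection refused) until the discovery poller reports 4420 "not found" and keeps only the 4421 path.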
00:25:43.358 [2024-07-26 11:14:02.844907] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:43.358 [2024-07-26 11:14:02.844923] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:43.358 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.618 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:43.618 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.619 11:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.619 11:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.001 [2024-07-26 11:14:04.159301] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:45.001 [2024-07-26 11:14:04.159318] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:45.001 [2024-07-26 11:14:04.159331] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:45.002 [2024-07-26 11:14:04.246581] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:45.262 [2024-07-26 11:14:04.520099] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:45.262 [2024-07-26 11:14:04.520125] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:25:45.263 request: 00:25:45.263 { 00:25:45.263 "name": "nvme", 00:25:45.263 "trtype": "tcp", 00:25:45.263 "traddr": "10.0.0.2", 00:25:45.263 "adrfam": "ipv4", 00:25:45.263 "trsvcid": "8009", 00:25:45.263 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:45.263 "wait_for_attach": true, 00:25:45.263 "method": "bdev_nvme_start_discovery", 00:25:45.263 "req_id": 1 00:25:45.263 } 00:25:45.263 Got JSON-RPC error response 00:25:45.263 response: 00:25:45.263 { 00:25:45.263 "code": -17, 00:25:45.263 "message": "File exists" 00:25:45.263 } 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.263 request: 00:25:45.263 { 00:25:45.263 "name": "nvme_second", 00:25:45.263 "trtype": "tcp", 00:25:45.263 "traddr": "10.0.0.2", 00:25:45.263 "adrfam": "ipv4", 00:25:45.263 "trsvcid": "8009", 00:25:45.263 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:45.263 "wait_for_attach": true, 00:25:45.263 "method": "bdev_nvme_start_discovery", 00:25:45.263 "req_id": 1 00:25:45.263 } 00:25:45.263 Got JSON-RPC error response 00:25:45.263 response: 00:25:45.263 { 00:25:45.263 "code": -17, 00:25:45.263 "message": "File exists" 00:25:45.263 } 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.263 11:14:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:45.263 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.524 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:45.524 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:45.524 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:45.524 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:45.524 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:45.524 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:45.524 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:45.524 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:45.524 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:45.524 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.524 11:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.465 [2024-07-26 11:14:05.775965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.465 [2024-07-26 11:14:05.775992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x242e2a0 with addr=10.0.0.2, port=8010 00:25:46.465 [2024-07-26 11:14:05.776007] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:46.465 [2024-07-26 11:14:05.776014] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:46.465 [2024-07-26 11:14:05.776020] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:47.406 [2024-07-26 11:14:06.778336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.406 [2024-07-26 11:14:06.778362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x242e2a0 with addr=10.0.0.2, port=8010 00:25:47.406 [2024-07-26 11:14:06.778379] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:47.406 [2024-07-26 11:14:06.778402] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:47.406 [2024-07-26 11:14:06.778408] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:48.407 [2024-07-26 11:14:07.780237] 
bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:48.407 request: 00:25:48.407 { 00:25:48.407 "name": "nvme_second", 00:25:48.407 "trtype": "tcp", 00:25:48.407 "traddr": "10.0.0.2", 00:25:48.407 "adrfam": "ipv4", 00:25:48.407 "trsvcid": "8010", 00:25:48.407 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:48.407 "wait_for_attach": false, 00:25:48.407 "attach_timeout_ms": 3000, 00:25:48.407 "method": "bdev_nvme_start_discovery", 00:25:48.407 "req_id": 1 00:25:48.407 } 00:25:48.407 Got JSON-RPC error response 00:25:48.407 response: 00:25:48.407 { 00:25:48.407 "code": -110, 00:25:48.407 "message": "Connection timed out" 00:25:48.407 } 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1557578 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:48.407 rmmod nvme_tcp 00:25:48.407 rmmod nvme_fabrics 00:25:48.407 rmmod nvme_keyring 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:48.407 11:14:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1557353 ']' 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1557353 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1557353 ']' 00:25:48.407 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1557353 00:25:48.667 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:48.667 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:48.667 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1557353 00:25:48.667 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:48.667 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:48.667 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1557353' 00:25:48.667 killing process with pid 1557353 00:25:48.667 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1557353 00:25:48.667 11:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1557353 00:25:48.667 11:14:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:48.667 11:14:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:48.667 11:14:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:48.667 11:14:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:48.667 11:14:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:48.667 11:14:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.667 11:14:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.667 11:14:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:51.207 00:25:51.207 real 0m17.773s 00:25:51.207 user 0m22.696s 00:25:51.207 sys 0m5.330s 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.207 ************************************ 00:25:51.207 END TEST nvmf_host_discovery 00:25:51.207 ************************************ 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.207 
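The NOT rpc_cmd ... bdev_nvme_start_discovery blocks above are expected-failure checks: re-registering a discovery service for 10.0.0.2:8009 (whether named "nvme" or "nvme_second") returns JSON-RPC error -17 "File exists", while pointing "nvme_second" at the unused port 8010 with a 3000 ms attach timeout returns -110 "Connection timed out". A sketch of issuing the same calls directly with scripts/rpc.py (rpc_cmd in the trace is essentially a wrapper around it; the rpc.py path is the one echoed in the multipath_status prologue below):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Duplicate discovery registration on 8009: expected to fail with -17 "File exists".
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
      -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
      && echo "unexpected success" >&2

  # Nothing listens on 8010: expected to fail with -110 once the 3000 ms timeout expires.
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
      -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 \
      && echo "unexpected success" >&2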
************************************ 00:25:51.207 START TEST nvmf_host_multipath_status 00:25:51.207 ************************************ 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:51.207 * Looking for test storage... 00:25:51.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:51.207 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:51.208 11:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # net_devs=() 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:56.486 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:56.486 
11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.486 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:56.487 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:56.487 Found net devices under 0000:86:00.0: cvl_0_0 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:56.487 11:14:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:56.487 Found net devices under 0000:86:00.1: cvl_0_1 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:56.487 11:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:56.487 11:14:15 
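The trace above is the NVMe/TCP test-network bring-up from nvmf/common.sh: one port of the e810 pair (cvl_0_0) is moved into a private network namespace so target and initiator can exchange real TCP traffic on the same machine. Condensed to the commands that appear in the trace (interface names, addresses and the namespace name are taken verbatim from it), the sequence is roughly:
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic to port 4420 through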
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:56.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:56.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:25:56.487 00:25:56.487 --- 10.0.0.2 ping statistics --- 00:25:56.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.487 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:56.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:56.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:25:56.487 00:25:56.487 --- 10.0.0.1 ping statistics --- 00:25:56.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.487 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1562642 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1562642 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1562642 ']' 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
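After the bidirectional ping check succeeds, nvmfappstart launches the SPDK target inside the namespace and waits for its RPC socket; the nvme-tcp kernel module is loaded for the initiator side. The essential commands, as they appear in the trace (the target is backgrounded, its pid recorded as nvmfpid):
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  # waitforlisten then polls /var/tmp/spdk.sock until the target answers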
00:25:56.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:56.487 11:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.487 [2024-07-26 11:14:15.311779] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:56.487 [2024-07-26 11:14:15.311821] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.487 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.487 [2024-07-26 11:14:15.368226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:56.487 [2024-07-26 11:14:15.447896] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.487 [2024-07-26 11:14:15.447932] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:56.487 [2024-07-26 11:14:15.447940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.487 [2024-07-26 11:14:15.447946] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.487 [2024-07-26 11:14:15.447951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:56.487 [2024-07-26 11:14:15.447993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.487 [2024-07-26 11:14:15.447996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.747 11:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:56.747 11:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:56.747 11:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:56.747 11:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:56.747 11:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.747 11:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.747 11:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1562642 00:25:56.747 11:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:57.006 [2024-07-26 11:14:16.296754] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.006 11:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:57.006 Malloc0 00:25:57.265 11:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:57.265 11:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:57.526 11:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.526 [2024-07-26 11:14:17.005895] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.786 11:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:57.786 [2024-07-26 11:14:17.174362] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:57.786 11:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1562907 00:25:57.787 11:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:57.787 11:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:57.787 11:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1562907 /var/tmp/bdevperf.sock 00:25:57.787 11:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1562907 ']' 00:25:57.787 11:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:57.787 11:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:57.787 11:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:57.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
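With the target up, multipath_status.sh configures it over scripts/rpc.py: a TCP transport, a malloc bdev serving as the namespace, and a single subsystem exposed through two listeners (4420 and 4421) on the same IP, which is what gives the host two paths to one namespace. Reduced to the RPCs shown in the trace (rpc.py abbreviates the full scripts/rpc.py path used there):
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421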
00:25:57.787 11:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:57.787 11:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.724 11:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:58.724 11:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:58.724 11:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:58.724 11:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:59.293 Nvme0n1 00:25:59.293 11:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:59.862 Nvme0n1 00:25:59.862 11:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:59.862 11:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:01.768 11:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:01.769 11:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:01.769 11:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:02.028 11:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:02.966 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:02.966 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:02.966 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.966 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:03.226 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.226 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:03.226 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
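On the host side, bdevperf is started in wait-for-RPC mode (-z) on its own socket and the same subsystem is attached twice, once per listener; the -x multipath flag on the second attach folds both connections into the single Nvme0n1 bdev, after which bdevperf.py perform_tests drives I/O for the rest of the test. Condensed from the trace (paths abbreviated relative to the SPDK tree):
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &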
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.226 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:03.486 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.486 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:03.486 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.486 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.746 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.746 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.746 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.746 11:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.746 11:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.746 11:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:03.746 11:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.746 11:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:04.009 11:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.009 11:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:04.009 11:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.009 11:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:04.270 11:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.270 11:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:04.270 11:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:04.271 11:14:23 
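Each check_status round in the trace asserts six values in a fixed order: current, then connected, then accessible, for port 4420 followed by 4421. port_status reads them from bdevperf's view of the I/O paths and compares against the expected literal, while set_ANA_state changes the listener state on the target; the pattern visible across the rounds is that at most one path reports current=true before the policy switch (none once both listeners are inaccessible), connected stays true throughout, and accessible drops to false only when a listener is made inaccessible. The two building blocks, as they appear in the trace:
  # query one attribute of one path from the initiator (bdevperf) side
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
  # drive the state change from the target side, e.g. make the 4420 listener non_optimized
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized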
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:04.530 11:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:05.467 11:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:05.467 11:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:05.467 11:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.467 11:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.726 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.726 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:05.726 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.726 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.985 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.985 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.985 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.985 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.245 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.245 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:06.245 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:06.245 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.245 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.245 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:06.245 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.245 11:14:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:06.505 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.505 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:06.505 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.505 11:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:06.764 11:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.764 11:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:06.764 11:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:06.764 11:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:07.024 11:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:07.965 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:07.965 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:07.965 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.965 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:08.225 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.225 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:08.225 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.225 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:08.486 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.486 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:08.486 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.486 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:08.746 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.746 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.746 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.746 11:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.746 11:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.746 11:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:08.746 11:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:08.746 11:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.006 11:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.006 11:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:09.006 11:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.006 11:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:09.266 11:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.266 11:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:09.266 11:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:09.266 11:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:09.526 11:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:10.466 11:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:10.466 11:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.726 11:14:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.726 11:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.726 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.726 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:10.726 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.726 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:10.986 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.986 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:10.986 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.986 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.246 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.246 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.246 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.246 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.246 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.246 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.246 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.246 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.506 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.506 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:11.506 11:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.506 11:14:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.767 11:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:11.767 11:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:11.767 11:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:11.767 11:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:12.027 11:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:12.967 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:12.967 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:12.967 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.967 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:13.227 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.227 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:13.227 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.227 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.487 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.487 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.487 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.487 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.487 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.487 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.487 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.487 11:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.747 11:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.747 11:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:13.747 11:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.747 11:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:14.006 11:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.006 11:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:14.006 11:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.006 11:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.006 11:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.006 11:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:14.266 11:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:14.266 11:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:14.526 11:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:15.465 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:15.465 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:15.465 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.465 11:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.726 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.726 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:15.726 11:14:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.726 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.986 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.986 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.986 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.986 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.986 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.986 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:15.986 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.986 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:16.245 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.245 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:16.245 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.245 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:16.505 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.505 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:16.505 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.505 11:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:16.766 11:14:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.766 11:14:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:16.766 11:14:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
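At host/multipath_status.sh@116 the multipath policy on the bdev is switched from the single-active-path behaviour seen above to active_active, and the same ANA permutations are replayed; from here a path is expected to report current=true whenever it is usable, which is why the next round (both listeners optimized) checks true for both 4420 and 4421. The switch itself is one RPC against the bdevperf socket:
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active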
set_ANA_state optimized optimized 00:26:16.766 11:14:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:17.026 11:14:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:17.286 11:14:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:18.227 11:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:18.227 11:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:18.227 11:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.227 11:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:18.487 11:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.487 11:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:18.487 11:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.487 11:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:18.487 11:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.487 11:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:18.487 11:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.487 11:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.746 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.746 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:18.746 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.746 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:19.006 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.006 11:14:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:19.006 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.006 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:19.006 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.006 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:19.006 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.006 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:19.265 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.265 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:19.265 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:19.525 11:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:19.785 11:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:20.726 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:20.726 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:20.726 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.726 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.985 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.985 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:20.985 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.985 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.985 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.985 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.985 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.985 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:21.313 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.313 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:21.314 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.314 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:21.314 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.314 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:21.314 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.314 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:21.574 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.574 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:21.574 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.574 11:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:21.834 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.834 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:21.834 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:22.154 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:22.154 11:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
00:26:23.118 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:23.118 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:23.118 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.118 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:23.378 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.378 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:23.378 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:23.378 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.638 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.638 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.638 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.638 11:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:23.638 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.638 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:23.638 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.638 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.898 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.898 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.898 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.898 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:24.157 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.157 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:24.157 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.157 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:24.417 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.417 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:24.417 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:24.417 11:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:24.677 11:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:25.617 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:25.617 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:25.617 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.617 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.877 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.877 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:25.877 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.877 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.138 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:26.138 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:26.138 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.138 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:26.138 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:26.138 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:26.138 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.138 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:26.399 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.399 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:26.399 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.399 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:26.658 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.658 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:26.658 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.658 11:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:26.919 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:26.919 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1562907 00:26:26.919 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1562907 ']' 00:26:26.919 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1562907 00:26:26.919 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:26.919 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:26.919 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1562907 00:26:26.919 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:26.919 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:26.919 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1562907' 00:26:26.919 killing process with pid 1562907 00:26:26.919 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1562907 00:26:26.919 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1562907 00:26:26.919 Connection closed with partial response: 00:26:26.919 00:26:26.919 00:26:27.184 
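The killprocess trace above follows the common helper in test/common/autotest_common.sh: it validates the pid, confirms the process still exists and is not sudo, then kills it and waits for it to exit (the "Connection closed with partial response" line is bdevperf being stopped mid-run). A rough sketch of that flow, with details inferred from the trace rather than copied from the helper:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                     # the '[' -z 1562907 ']' guard in the trace
    kill -0 "$pid" || return 1                    # process must still be alive
    if [ "$(uname)" = Linux ]; then
        proc=$(ps --no-headers -o comm= "$pid")   # reactor_2 in this run
    fi
    # the real helper special-cases proc = sudo; that branch is omitted in this sketch
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                           # the test also waits again at multipath_status.sh@139
}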
11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1562907 00:26:27.184 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:27.184 [2024-07-26 11:14:17.232874] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:27.184 [2024-07-26 11:14:17.232925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1562907 ] 00:26:27.184 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.184 [2024-07-26 11:14:17.282945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.184 [2024-07-26 11:14:17.356509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.184 Running I/O for 90 seconds... 00:26:27.184 [2024-07-26 11:14:31.247360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
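Everything that follows is the bdevperf log captured in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt: SPDK/DPDK initialization, then a per-command trace of queued NVMe I/O and its completion status, where completions come back as ASYMMETRIC ACCESS INACCESSIBLE (03/02) as the listeners are toggled between ANA states. A hypothetical one-liner (not part of the test) for summarizing those completion statuses when reading the file by hand:

grep -o 'ASYMMETRIC ACCESS [A-Z ]*([0-9a-f]*/[0-9a-f]*)' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt \
    | sort | uniq -c | sort -rn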
00:26:27.184 [2024-07-26 11:14:31.247553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.247981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.247993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.248000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.248013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.248019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.248032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.248038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.248056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.184 [2024-07-26 11:14:31.248063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.248076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.184 [2024-07-26 11:14:31.248083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.248099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.184 [2024-07-26 11:14:31.248105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.248118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.184 [2024-07-26 11:14:31.248125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.248138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.184 [2024-07-26 11:14:31.248145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.248158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.184 [2024-07-26 11:14:31.248165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.184 [2024-07-26 11:14:31.248178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.185 [2024-07-26 11:14:31.248184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.248197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.185 [2024-07-26 11:14:31.248204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.248216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.185 [2024-07-26 11:14:31.248223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.248238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.185 [2024-07-26 11:14:31.248246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.248259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.185 [2024-07-26 11:14:31.248265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.248278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.185 [2024-07-26 11:14:31.248285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.248298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.185 [2024-07-26 11:14:31.248305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.248318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.185 [2024-07-26 11:14:31.248324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.248342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.185 [2024-07-26 11:14:31.248349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.248362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:27.185 [2024-07-26 11:14:31.248369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.248381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.248388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.248402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.248408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.185 [2024-07-26 11:14:31.249638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.185 [2024-07-26 11:14:31.249661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.185 [2024-07-26 11:14:31.249684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.185 [2024-07-26 11:14:31.249707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.185 [2024-07-26 11:14:31.249730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.185 [2024-07-26 11:14:31.249752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:27.185 [2024-07-26 11:14:31.249769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.249775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:26:27.186 [2024-07-26 11:14:31.249791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.249798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.249814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.249821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.249838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.249846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.249863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.249869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.249885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.249891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.249908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.249916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.249933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.249939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.186 [2024-07-26 11:14:31.250089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250309] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:55448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.186 [2024-07-26 11:14:31.250640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.186 [2024-07-26 11:14:31.250647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.250664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.187 [2024-07-26 11:14:31.250671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.250688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.187 [2024-07-26 11:14:31.250695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.250712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.187 [2024-07-26 11:14:31.250719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.250737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.187 [2024-07-26 11:14:31.250744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.250761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.187 [2024-07-26 11:14:31.250768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.250785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:55512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.187 [2024-07-26 11:14:31.250792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.250809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.187 [2024-07-26 11:14:31.250815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.250832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.187 [2024-07-26 11:14:31.250839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.250856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.187 [2024-07-26 11:14:31.250864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.250881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.250888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.250905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.250914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.250932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.250939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.250956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.250962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.250979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.250986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.251003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.251009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.251026] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.251033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.251053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.251060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.251077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.251084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.251101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.251107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.251124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.251130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.251148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.251156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.251173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.251180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.251197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.251204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.251221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.251228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:31.251245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:31.251251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:26:27.187 [2024-07-26 11:14:44.035920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:44.035958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:44.035992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:44.036001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:44.036014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:44.036021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:44.036033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:44.036040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:44.036058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:44.036065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:44.036078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:44.036084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:44.036219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.187 [2024-07-26 11:14:44.036229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:44.036241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.187 [2024-07-26 11:14:44.036249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:27.187 [2024-07-26 11:14:44.036265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.036641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.188 [2024-07-26 11:14:44.036660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.036673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:27.188 [2024-07-26 11:14:44.036679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.038352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.038372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.038388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.038395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.038407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.038418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.038431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.038437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.038450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.038457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.038469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.038477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.038491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.038498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.038511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.038517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.038530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.038537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.038549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.038556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:27.188 [2024-07-26 11:14:44.038568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.188 [2024-07-26 11:14:44.038575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:27.189 [2024-07-26 11:14:44.038588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.189 [2024-07-26 11:14:44.038595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:27.189 [2024-07-26 11:14:44.038607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.189 [2024-07-26 11:14:44.038614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:27.189 [2024-07-26 11:14:44.038626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.189 [2024-07-26 11:14:44.038633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:27.189 [2024-07-26 11:14:44.038646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.189 [2024-07-26 11:14:44.038654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:27.189 [2024-07-26 11:14:44.038666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.189 [2024-07-26 11:14:44.038673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:27.189 [2024-07-26 11:14:44.038685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.189 [2024-07-26 11:14:44.038692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:27.189 [2024-07-26 11:14:44.038704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.189 [2024-07-26 11:14:44.038711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:27.189 [2024-07-26 11:14:44.038723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.189 [2024-07-26 11:14:44.038730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:27.189 [2024-07-26 11:14:44.038742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.189 [2024-07-26 11:14:44.038749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:27.189 [2024-07-26 11:14:44.038760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.189 [2024-07-26 11:14:44.038767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:27.189 [2024-07-26 11:14:44.038780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:27.189 [2024-07-26 11:14:44.038787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:27.189 Received shutdown signal, test time was about 27.018824 seconds 00:26:27.189 00:26:27.189 Latency(us) 00:26:27.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.189 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:27.189 Verification LBA range: start 0x0 length 0x4000 00:26:27.189 Nvme0n1 : 27.02 10415.71 40.69 0.00 0.00 12267.18 669.61 3034487.76 00:26:27.189 =================================================================================================================== 00:26:27.189 Total : 10415.71 40.69 0.00 0.00 12267.18 669.61 3034487.76 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:27.189 rmmod nvme_tcp 00:26:27.189 rmmod nvme_fabrics 00:26:27.189 rmmod nvme_keyring 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1562642 ']' 00:26:27.189 11:14:46 
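
The teardown traced above removes the exported subsystem over RPC and then unloads the kernel NVMe/TCP initiator stack before the next test starts. A minimal standalone sketch of the same cleanup, assuming an SPDK checkout location in SPDK_DIR and a target pid in NVMF_PID (both placeholders, not taken from this log); the subsystem NQN and the modprobe call are copied from the trace:

    #!/usr/bin/env bash
    # Cleanup sketch mirroring the traced multipath_status teardown.
    SPDK_DIR=${SPDK_DIR:-/path/to/spdk}        # assumed checkout location

    # Drop the subsystem the test exported (same NQN as in the trace).
    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Flush dirty pages, then unload the kernel initiator modules; modprobe -r
    # also drops the now-unused nvme_fabrics / nvme_keyring dependencies,
    # which is what the rmmod messages above correspond to.
    sync
    modprobe -v -r nvme-tcp

    # Finally stop the target app; the harness does this via its killprocess
    # helper, the plain kill here is a simplification.
    [[ -n ${NVMF_PID:-} ]] && kill "$NVMF_PID"
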
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1562642 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1562642 ']' 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1562642 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:27.189 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1562642 00:26:27.449 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:27.449 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:27.449 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1562642' 00:26:27.449 killing process with pid 1562642 00:26:27.449 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1562642 00:26:27.449 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1562642 00:26:27.449 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:27.449 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:27.449 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:27.449 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:27.449 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:27.449 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.449 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.449 11:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.989 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:29.989 00:26:29.989 real 0m38.721s 00:26:29.989 user 1m45.723s 00:26:29.989 sys 0m10.379s 00:26:29.989 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:29.989 11:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:29.989 ************************************ 00:26:29.989 END TEST nvmf_host_multipath_status 00:26:29.989 ************************************ 00:26:29.989 11:14:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:29.989 11:14:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:29.989 11:14:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:29.989 11:14:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.989 ************************************ 00:26:29.989 START TEST 
nvmf_discovery_remove_ifc 00:26:29.989 ************************************ 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:29.989 * Looking for test storage... 00:26:29.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
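
The common.sh sourcing above also derives the host identity that later connect and discovery calls present: NVME_HOSTNQN comes from nvme gen-hostnqn and NVME_HOSTID is the UUID suffix of that NQN, as the two values printed in the trace show. A small sketch of the same derivation, assuming nvme-cli is installed; the exact parameter-expansion used by common.sh is an assumption, and the commented connect line uses purely illustrative address/port/NQN values:

    # Generate a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>,
    # then peel the UUID back out of it (mirrors the relationship visible above).
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    echo "hostnqn: $NVME_HOSTNQN"
    echo "hostid:  $NVME_HOSTID"

    # Illustrative only -- target address, port and subsystem NQN are placeholders:
    # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"
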
00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:29.989 11:14:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:35.288 11:14:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:35.288 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:35.288 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:35.288 Found net devices under 0000:86:00.0: cvl_0_0 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.288 
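
The NIC scan above resolves PCI functions to net device names by listing the net/ subdirectory of each matching device in sysfs, which is why the two E810 ports 0000:86:00.0 and 0000:86:00.1 come back as cvl_0_0 and cvl_0_1. A sketch of the same lookup; the harness itself walks a cached PCI bus scan, so the lspci front end here is an assumption for brevity, while the vendor:device pair 8086:159b and the sysfs path are taken from the trace:

    # Map every Intel E810 (8086:159b) function to its bound net device name.
    for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$bdf"/net/*; do
            [[ -e $netdir ]] || continue   # skip functions with no bound netdev
            echo "Found net device under $bdf: $(basename "$netdir")"
        done
    done
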
11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:35.288 Found net devices under 0000:86:00.1: cvl_0_1 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:35.288 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:35.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:35.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:26:35.289 00:26:35.289 --- 10.0.0.2 ping statistics --- 00:26:35.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.289 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:35.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:35.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:26:35.289 00:26:35.289 --- 10.0.0.1 ping statistics --- 00:26:35.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.289 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1571340 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1571340 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1571340 ']' 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
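
Everything from nvmf_tcp_init onward moves the target-side port into its own network namespace, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) talk over the physical link rather than loopback, and the two pings verify both directions before the target app is launched inside the namespace. A condensed sketch of that plumbing; interface names, addresses and command arguments are the ones traced above, root privileges and an SPDK build under ./build are assumed:

    NS=cvl_0_0_ns_spdk

    # Start from clean addresses, then move the target interface into its namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"

    # Address the two ends of the link.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring everything up and open the NVMe/TCP port on the initiator side.
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check reachability in both directions, as the harness does.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # Launch the target inside the namespace with the traced core mask / trace flags.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
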
00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:35.289 11:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.289 [2024-07-26 11:14:54.611181] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:35.289 [2024-07-26 11:14:54.611226] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.289 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.289 [2024-07-26 11:14:54.669571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.289 [2024-07-26 11:14:54.744849] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.289 [2024-07-26 11:14:54.744888] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.289 [2024-07-26 11:14:54.744895] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.289 [2024-07-26 11:14:54.744901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.289 [2024-07-26 11:14:54.744906] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.289 [2024-07-26 11:14:54.744941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.230 [2024-07-26 11:14:55.460031] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.230 [2024-07-26 11:14:55.468196] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:36.230 null0 00:26:36.230 [2024-07-26 11:14:55.500184] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1571458 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1571458 /tmp/host.sock 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1571458 ']' 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:36.230 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:36.230 11:14:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.230 [2024-07-26 11:14:55.566181] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:36.230 [2024-07-26 11:14:55.566221] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571458 ] 00:26:36.230 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.230 [2024-07-26 11:14:55.618556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.230 [2024-07-26 11:14:55.691285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.170 11:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:37.170 11:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:37.170 11:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:37.170 11:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:37.170 11:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.170 11:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.170 11:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.170 11:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:37.170 11:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.170 11:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:37.170 11:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.170 11:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:37.170 
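
On the host side the second nvmf_tgt instance is started with --wait-for-rpc, so the bdev_nvme option is set first, then subsystem initialization is completed, and only then is discovery pointed at the target's discovery service on 10.0.0.2:8009 with the short loss/reconnect timeouts this test relies on. A sketch of those three calls as plain rpc.py invocations against the same /tmp/host.sock socket; treating the traced rpc_cmd helper as a thin wrapper around scripts/rpc.py is an assumption about the harness, not something shown in this excerpt:

    # Assumed equivalent of the traced rpc_cmd helper.
    rpc_host() { ./scripts/rpc.py -s /tmp/host.sock "$@"; }

    # Same bdev_nvme option as traced, set before framework init.
    rpc_host bdev_nvme_set_options -e 1

    # Finish initialization of the --wait-for-rpc instance.
    rpc_host framework_start_init

    # Attach to the discovery service and auto-attach discovered subsystems.
    rpc_host bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
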
11:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.170 11:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.110 [2024-07-26 11:14:57.479651] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:38.110 [2024-07-26 11:14:57.479669] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:38.110 [2024-07-26 11:14:57.479683] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:38.110 [2024-07-26 11:14:57.567954] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:38.370 [2024-07-26 11:14:57.633292] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:38.370 [2024-07-26 11:14:57.633334] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:38.370 [2024-07-26 11:14:57.633355] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:38.370 [2024-07-26 11:14:57.633368] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:38.370 [2024-07-26 11:14:57.633385] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.370 [2024-07-26 11:14:57.638798] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18d0e60 was disconnected and freed. delete nvme_qpair. 
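
Once discovery attaches nvme0, its namespace surfaces as bdev nvme0n1, and the wait_for_bdev step simply polls the bdev list until it matches the expected name. A sketch of an equivalent polling loop built from the same bdev_get_bdevs | jq pipeline seen in the trace; the one-second sleep and the pipeline are taken from the trace, while the 30-iteration bound and running from the SPDK root are added assumptions:

    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll (at most ~30 s, an arbitrary bound) until the expected bdev appears.
    expected=nvme0n1
    for _ in $(seq 1 30); do
        [[ "$(get_bdev_list)" == "$expected" ]] && break
        sleep 1
    done
    echo "bdevs now: $(get_bdev_list)"
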
00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:38.370 11:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.751 11:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:39.751 11:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:39.752 11:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:39.752 11:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.752 11:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:39.752 11:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:39.752 11:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:39.752 11:14:58 
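
The removal step then deletes the target-side address and downs cvl_0_0 inside the namespace, and the same polling loop is reused with an empty expected list: with --ctrlr-loss-timeout-sec 2 and --reconnect-delay-sec 1 the controller should be torn down shortly after the path is declared lost, taking nvme0n1 with it. The two interface commands, as traced; the trailing comment refers to the get_bdev_list sketch above with expected set to the empty string:

    NS=cvl_0_0_ns_spdk
    ip netns exec "$NS" ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec "$NS" ip link set cvl_0_0 down
    # Then poll, e.g. with the get_bdev_list loop above and expected="".
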
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.752 11:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:39.752 11:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.690 11:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.690 11:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.690 11:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.690 11:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.690 11:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.690 11:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.690 11:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.690 11:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.690 11:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:40.690 11:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:41.666 11:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.666 11:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.666 11:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.666 11:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.666 11:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.666 11:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.666 11:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.666 11:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.666 11:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:41.666 11:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:42.605 11:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:42.605 11:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:42.605 11:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:42.605 11:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.605 11:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:42.605 11:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.605 11:15:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:42.605 11:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.606 11:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:42.606 11:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:43.987 11:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:43.987 11:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:43.987 11:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:43.987 11:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.987 11:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:43.987 11:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.987 11:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:43.987 11:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.987 [2024-07-26 11:15:03.074473] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:43.987 [2024-07-26 11:15:03.074514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.987 [2024-07-26 11:15:03.074524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.987 [2024-07-26 11:15:03.074533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.987 [2024-07-26 11:15:03.074540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.987 [2024-07-26 11:15:03.074548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.987 [2024-07-26 11:15:03.074559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.987 [2024-07-26 11:15:03.074567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.987 [2024-07-26 11:15:03.074574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.987 [2024-07-26 11:15:03.074582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:43.987 [2024-07-26 11:15:03.074588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.987 [2024-07-26 11:15:03.074595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18976b0 is same with the state(5) to be set 00:26:43.987 [2024-07-26 
11:15:03.084493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18976b0 (9): Bad file descriptor 00:26:43.987 11:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:43.987 11:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:43.987 [2024-07-26 11:15:03.094532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:44.928 11:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:44.928 11:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.928 11:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:44.928 11:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.928 11:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.928 11:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:44.928 11:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.928 [2024-07-26 11:15:04.130061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:44.928 [2024-07-26 11:15:04.130100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18976b0 with addr=10.0.0.2, port=4420 00:26:44.928 [2024-07-26 11:15:04.130114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18976b0 is same with the state(5) to be set 00:26:44.928 [2024-07-26 11:15:04.130138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18976b0 (9): Bad file descriptor 00:26:44.928 [2024-07-26 11:15:04.130535] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:44.928 [2024-07-26 11:15:04.130560] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:44.928 [2024-07-26 11:15:04.130570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:44.928 [2024-07-26 11:15:04.130580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:44.928 [2024-07-26 11:15:04.130596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.928 [2024-07-26 11:15:04.130607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:44.928 11:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.928 11:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:44.928 11:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:45.868 [2024-07-26 11:15:05.133087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
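The errno 110 (ETIMEDOUT) connects and "Bad file descriptor" flushes above are the intended failure mode here: the test has already pulled the target address out from under the initiator, and it restores it a little later in the trace. Stripped of the xtrace noise, the fault-injection and recovery commands (netns and device names exactly as in this run) are roughly:

  # Fault injection: drop the target address and down the link inside the target netns.
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # Host-side bdev_nvme now fails every reconnect attempt with ETIMEDOUT, until:
  # Recovery: put the address back and bring the link up so discovery can re-attach.
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up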
00:26:45.868 [2024-07-26 11:15:05.133107] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:45.868 [2024-07-26 11:15:05.133117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:45.868 [2024-07-26 11:15:05.133124] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:45.868 [2024-07-26 11:15:05.133134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.868 [2024-07-26 11:15:05.133151] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:45.868 [2024-07-26 11:15:05.133169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.868 [2024-07-26 11:15:05.133178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.868 [2024-07-26 11:15:05.133187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.868 [2024-07-26 11:15:05.133193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.868 [2024-07-26 11:15:05.133201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.868 [2024-07-26 11:15:05.133208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.868 [2024-07-26 11:15:05.133214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.868 [2024-07-26 11:15:05.133221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.868 [2024-07-26 11:15:05.133228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:45.868 [2024-07-26 11:15:05.133235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.868 [2024-07-26 11:15:05.133240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
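While the reconnect loop is failing, the host-side state can also be inspected out of band over the same RPC socket. This is a hedged aside rather than something this test itself does, but both RPCs are standard SPDK calls and the socket path comes from the trace:

  # List the controllers bdev_nvme is tracking and whatever bdevs remain.
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
  ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs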
00:26:45.868 [2024-07-26 11:15:05.133415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1896a80 (9): Bad file descriptor 00:26:45.868 [2024-07-26 11:15:05.134425] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:45.868 [2024-07-26 11:15:05.134435] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:45.868 11:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:47.251 11:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:47.251 11:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.251 11:15:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:47.251 11:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.251 11:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:47.251 11:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.251 11:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:47.251 11:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.251 11:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:47.251 11:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:47.821 [2024-07-26 11:15:07.186233] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:47.821 [2024-07-26 11:15:07.186250] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:47.821 [2024-07-26 11:15:07.186263] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:47.821 [2024-07-26 11:15:07.274542] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:48.081 11:15:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:48.081 11:15:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.081 11:15:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:48.081 11:15:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.081 11:15:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:48.081 11:15:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.081 11:15:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:48.081 11:15:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.081 11:15:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:48.081 11:15:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:48.081 [2024-07-26 11:15:07.498720] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:48.081 [2024-07-26 11:15:07.498758] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:48.081 [2024-07-26 11:15:07.498776] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:48.081 [2024-07-26 11:15:07.498789] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:48.081 [2024-07-26 11:15:07.498795] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:48.081 [2024-07-26 11:15:07.504825] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x189e180 was disconnected and freed. 
delete nvme_qpair. 00:26:49.022 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1571458 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1571458 ']' 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1571458 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:49.023 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1571458 00:26:49.283 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:49.283 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:49.283 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1571458' 00:26:49.283 killing process with pid 1571458 00:26:49.283 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1571458 00:26:49.283 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1571458 00:26:49.283 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:49.283 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:49.283 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:49.283 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:49.283 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:49.283 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:49.283 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:49.283 rmmod nvme_tcp 00:26:49.283 rmmod nvme_fabrics 00:26:49.283 rmmod nvme_keyring 
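The teardown around this point is the usual killprocess + nvmftestfini sequence. Reduced to its effective commands (PIDs are the ones from this run; killprocess's uname/ps checks and signal escalation are omitted), it is roughly:

  kill 1571458 && wait 1571458   # host-side SPDK app serving /tmp/host.sock
  modprobe -v -r nvme-tcp        # matches the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill 1571340 && wait 1571340   # the nvmf target itself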
00:26:49.283 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:49.544 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:49.544 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:49.544 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1571340 ']' 00:26:49.544 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1571340 00:26:49.544 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1571340 ']' 00:26:49.544 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1571340 00:26:49.544 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:49.544 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:49.544 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1571340 00:26:49.544 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:49.544 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:49.544 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1571340' 00:26:49.544 killing process with pid 1571340 00:26:49.544 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1571340 00:26:49.545 11:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1571340 00:26:49.545 11:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:49.545 11:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:49.545 11:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:49.545 11:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:49.545 11:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:49.545 11:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.545 11:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.545 11:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:52.088 00:26:52.088 real 0m22.047s 00:26:52.088 user 0m28.677s 00:26:52.088 sys 0m5.338s 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.088 ************************************ 00:26:52.088 END TEST nvmf_discovery_remove_ifc 00:26:52.088 ************************************ 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.088 ************************************ 00:26:52.088 START TEST nvmf_identify_kernel_target 00:26:52.088 ************************************ 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:52.088 * Looking for test storage... 00:26:52.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.088 11:15:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:52.088 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:52.089 11:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:57.378 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.378 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:57.378 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:57.378 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:57.378 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:57.378 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:57.378 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:57.378 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:57.378 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:57.378 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:57.378 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:57.378 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:57.378 
11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:57.378 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:57.379 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:57.379 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:57.379 Found net devices under 0000:86:00.0: cvl_0_0 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:57.379 Found net devices under 0000:86:00.1: cvl_0_1 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:57.379 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:57.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:57.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:26:57.380 00:26:57.380 --- 10.0.0.2 ping statistics --- 00:26:57.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.380 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:57.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.405 ms 00:26:57.380 00:26:57.380 --- 10.0.0.1 ping statistics --- 00:26:57.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.380 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:57.380 11:15:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:59.424 Waiting for block devices as requested 00:26:59.424 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:59.424 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:59.684 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:59.684 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:59.684 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:59.684 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:59.684 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:59.944 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:59.944 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:59.944 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:59.944 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:00.203 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:00.203 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:00.203 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:00.463 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:00.463 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:00.463 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
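The configure_kernel_target trace that continues below builds a kernel nvmet subsystem through configfs. Because set -x does not print redirection targets, the files the echo commands write to are not visible in the trace; the sketch below therefore uses the standard kernel nvmet configfs attribute names rather than anything read from the trace, with the values taken from this run:

  # Export /dev/nvme0n1 as namespace 1 of nqn.2016-06.io.spdk:testnqn, listening on 10.0.0.1:4420/TCP.
  mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir /sys/kernel/config/nvmet/ports/1
  echo SPDK-nqn.2016-06.io.spdk:testnqn > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/attr_serial   # attribute name assumed; the trace hides the redirection
  echo 1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  echo /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/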
00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:00.463 No valid GPT data, bailing 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:00.463 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:00.724 11:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:00.724 00:27:00.724 Discovery Log Number of Records 2, Generation counter 2 00:27:00.724 =====Discovery Log Entry 0====== 00:27:00.724 trtype: tcp 00:27:00.724 adrfam: ipv4 00:27:00.724 subtype: current discovery subsystem 00:27:00.724 treq: not specified, sq flow control disable supported 00:27:00.724 portid: 1 00:27:00.724 trsvcid: 4420 00:27:00.724 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:00.724 traddr: 10.0.0.1 00:27:00.724 eflags: none 00:27:00.724 sectype: none 00:27:00.724 =====Discovery Log Entry 1====== 00:27:00.724 trtype: tcp 00:27:00.724 adrfam: ipv4 00:27:00.724 subtype: nvme subsystem 00:27:00.724 treq: not specified, sq flow control disable supported 00:27:00.724 portid: 1 00:27:00.724 trsvcid: 4420 00:27:00.724 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:00.724 traddr: 10.0.0.1 00:27:00.724 eflags: none 00:27:00.724 sectype: none 00:27:00.724 11:15:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:00.724 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:00.724 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.724 ===================================================== 00:27:00.724 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:00.724 ===================================================== 00:27:00.724 Controller Capabilities/Features 00:27:00.724 ================================ 00:27:00.724 Vendor ID: 0000 00:27:00.724 Subsystem Vendor ID: 0000 00:27:00.724 Serial Number: 793401b0e6ecefec29a3 00:27:00.724 Model Number: Linux 00:27:00.724 Firmware Version: 6.7.0-68 00:27:00.724 Recommended Arb Burst: 0 00:27:00.724 IEEE OUI Identifier: 00 00 00 00:27:00.724 Multi-path I/O 00:27:00.724 May have multiple subsystem ports: No 00:27:00.724 May have multiple controllers: No 00:27:00.724 Associated with SR-IOV VF: No 00:27:00.724 Max Data Transfer Size: Unlimited 00:27:00.724 Max Number of Namespaces: 0 00:27:00.724 Max Number of I/O Queues: 1024 00:27:00.724 NVMe Specification Version (VS): 1.3 00:27:00.724 NVMe Specification Version (Identify): 1.3 00:27:00.724 Maximum Queue Entries: 1024 00:27:00.724 Contiguous Queues Required: No 00:27:00.724 Arbitration Mechanisms Supported 00:27:00.724 Weighted Round Robin: Not Supported 00:27:00.724 Vendor Specific: Not Supported 00:27:00.724 Reset Timeout: 7500 ms 00:27:00.724 Doorbell Stride: 4 bytes 00:27:00.724 NVM Subsystem Reset: Not Supported 00:27:00.724 Command Sets Supported 00:27:00.724 NVM Command Set: Supported 00:27:00.724 Boot Partition: Not Supported 00:27:00.724 Memory Page Size Minimum: 4096 bytes 00:27:00.724 Memory Page Size Maximum: 4096 bytes 00:27:00.724 Persistent Memory Region: Not Supported 00:27:00.724 Optional Asynchronous Events Supported 00:27:00.724 Namespace Attribute Notices: Not Supported 00:27:00.724 Firmware Activation Notices: Not Supported 00:27:00.724 ANA Change Notices: Not Supported 00:27:00.724 PLE Aggregate Log Change Notices: Not Supported 00:27:00.724 LBA Status Info Alert Notices: Not Supported 00:27:00.724 EGE Aggregate Log Change Notices: Not Supported 00:27:00.724 Normal NVM Subsystem Shutdown event: Not Supported 00:27:00.724 Zone Descriptor Change Notices: Not Supported 00:27:00.724 Discovery Log Change Notices: Supported 00:27:00.724 Controller Attributes 00:27:00.724 128-bit Host Identifier: Not Supported 00:27:00.724 Non-Operational Permissive Mode: Not Supported 00:27:00.724 NVM Sets: Not Supported 00:27:00.724 Read Recovery Levels: Not Supported 00:27:00.724 Endurance Groups: Not Supported 00:27:00.724 Predictable Latency Mode: Not Supported 00:27:00.724 Traffic Based Keep ALive: Not Supported 00:27:00.724 Namespace Granularity: Not Supported 00:27:00.724 SQ Associations: Not Supported 00:27:00.724 UUID List: Not Supported 00:27:00.724 Multi-Domain Subsystem: Not Supported 00:27:00.724 Fixed Capacity Management: Not Supported 00:27:00.724 Variable Capacity Management: Not Supported 00:27:00.724 Delete Endurance Group: Not Supported 00:27:00.724 Delete NVM Set: Not Supported 00:27:00.724 Extended LBA Formats Supported: Not Supported 00:27:00.724 Flexible Data Placement Supported: Not Supported 00:27:00.724 00:27:00.724 Controller Memory Buffer Support 00:27:00.724 ================================ 00:27:00.724 Supported: No 
00:27:00.724 00:27:00.724 Persistent Memory Region Support 00:27:00.724 ================================ 00:27:00.724 Supported: No 00:27:00.724 00:27:00.724 Admin Command Set Attributes 00:27:00.724 ============================ 00:27:00.724 Security Send/Receive: Not Supported 00:27:00.724 Format NVM: Not Supported 00:27:00.724 Firmware Activate/Download: Not Supported 00:27:00.724 Namespace Management: Not Supported 00:27:00.724 Device Self-Test: Not Supported 00:27:00.724 Directives: Not Supported 00:27:00.724 NVMe-MI: Not Supported 00:27:00.724 Virtualization Management: Not Supported 00:27:00.724 Doorbell Buffer Config: Not Supported 00:27:00.724 Get LBA Status Capability: Not Supported 00:27:00.724 Command & Feature Lockdown Capability: Not Supported 00:27:00.724 Abort Command Limit: 1 00:27:00.724 Async Event Request Limit: 1 00:27:00.724 Number of Firmware Slots: N/A 00:27:00.724 Firmware Slot 1 Read-Only: N/A 00:27:00.724 Firmware Activation Without Reset: N/A 00:27:00.724 Multiple Update Detection Support: N/A 00:27:00.724 Firmware Update Granularity: No Information Provided 00:27:00.724 Per-Namespace SMART Log: No 00:27:00.724 Asymmetric Namespace Access Log Page: Not Supported 00:27:00.724 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:00.724 Command Effects Log Page: Not Supported 00:27:00.724 Get Log Page Extended Data: Supported 00:27:00.724 Telemetry Log Pages: Not Supported 00:27:00.724 Persistent Event Log Pages: Not Supported 00:27:00.724 Supported Log Pages Log Page: May Support 00:27:00.724 Commands Supported & Effects Log Page: Not Supported 00:27:00.724 Feature Identifiers & Effects Log Page:May Support 00:27:00.724 NVMe-MI Commands & Effects Log Page: May Support 00:27:00.724 Data Area 4 for Telemetry Log: Not Supported 00:27:00.724 Error Log Page Entries Supported: 1 00:27:00.724 Keep Alive: Not Supported 00:27:00.724 00:27:00.724 NVM Command Set Attributes 00:27:00.724 ========================== 00:27:00.724 Submission Queue Entry Size 00:27:00.724 Max: 1 00:27:00.724 Min: 1 00:27:00.724 Completion Queue Entry Size 00:27:00.724 Max: 1 00:27:00.724 Min: 1 00:27:00.724 Number of Namespaces: 0 00:27:00.724 Compare Command: Not Supported 00:27:00.724 Write Uncorrectable Command: Not Supported 00:27:00.724 Dataset Management Command: Not Supported 00:27:00.724 Write Zeroes Command: Not Supported 00:27:00.724 Set Features Save Field: Not Supported 00:27:00.724 Reservations: Not Supported 00:27:00.724 Timestamp: Not Supported 00:27:00.724 Copy: Not Supported 00:27:00.724 Volatile Write Cache: Not Present 00:27:00.724 Atomic Write Unit (Normal): 1 00:27:00.724 Atomic Write Unit (PFail): 1 00:27:00.724 Atomic Compare & Write Unit: 1 00:27:00.724 Fused Compare & Write: Not Supported 00:27:00.724 Scatter-Gather List 00:27:00.724 SGL Command Set: Supported 00:27:00.724 SGL Keyed: Not Supported 00:27:00.724 SGL Bit Bucket Descriptor: Not Supported 00:27:00.724 SGL Metadata Pointer: Not Supported 00:27:00.724 Oversized SGL: Not Supported 00:27:00.724 SGL Metadata Address: Not Supported 00:27:00.724 SGL Offset: Supported 00:27:00.724 Transport SGL Data Block: Not Supported 00:27:00.724 Replay Protected Memory Block: Not Supported 00:27:00.724 00:27:00.724 Firmware Slot Information 00:27:00.724 ========================= 00:27:00.724 Active slot: 0 00:27:00.724 00:27:00.724 00:27:00.724 Error Log 00:27:00.724 ========= 00:27:00.724 00:27:00.724 Active Namespaces 00:27:00.724 ================= 00:27:00.724 Discovery Log Page 00:27:00.724 ================== 00:27:00.724 
Generation Counter: 2 00:27:00.724 Number of Records: 2 00:27:00.724 Record Format: 0 00:27:00.724 00:27:00.724 Discovery Log Entry 0 00:27:00.725 ---------------------- 00:27:00.725 Transport Type: 3 (TCP) 00:27:00.725 Address Family: 1 (IPv4) 00:27:00.725 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:00.725 Entry Flags: 00:27:00.725 Duplicate Returned Information: 0 00:27:00.725 Explicit Persistent Connection Support for Discovery: 0 00:27:00.725 Transport Requirements: 00:27:00.725 Secure Channel: Not Specified 00:27:00.725 Port ID: 1 (0x0001) 00:27:00.725 Controller ID: 65535 (0xffff) 00:27:00.725 Admin Max SQ Size: 32 00:27:00.725 Transport Service Identifier: 4420 00:27:00.725 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:00.725 Transport Address: 10.0.0.1 00:27:00.725 Discovery Log Entry 1 00:27:00.725 ---------------------- 00:27:00.725 Transport Type: 3 (TCP) 00:27:00.725 Address Family: 1 (IPv4) 00:27:00.725 Subsystem Type: 2 (NVM Subsystem) 00:27:00.725 Entry Flags: 00:27:00.725 Duplicate Returned Information: 0 00:27:00.725 Explicit Persistent Connection Support for Discovery: 0 00:27:00.725 Transport Requirements: 00:27:00.725 Secure Channel: Not Specified 00:27:00.725 Port ID: 1 (0x0001) 00:27:00.725 Controller ID: 65535 (0xffff) 00:27:00.725 Admin Max SQ Size: 32 00:27:00.725 Transport Service Identifier: 4420 00:27:00.725 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:00.725 Transport Address: 10.0.0.1 00:27:00.725 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:00.725 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.725 get_feature(0x01) failed 00:27:00.725 get_feature(0x02) failed 00:27:00.725 get_feature(0x04) failed 00:27:00.725 ===================================================== 00:27:00.725 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:00.725 ===================================================== 00:27:00.725 Controller Capabilities/Features 00:27:00.725 ================================ 00:27:00.725 Vendor ID: 0000 00:27:00.725 Subsystem Vendor ID: 0000 00:27:00.725 Serial Number: 350012396cc61a8d39e7 00:27:00.725 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:00.725 Firmware Version: 6.7.0-68 00:27:00.725 Recommended Arb Burst: 6 00:27:00.725 IEEE OUI Identifier: 00 00 00 00:27:00.725 Multi-path I/O 00:27:00.725 May have multiple subsystem ports: Yes 00:27:00.725 May have multiple controllers: Yes 00:27:00.725 Associated with SR-IOV VF: No 00:27:00.725 Max Data Transfer Size: Unlimited 00:27:00.725 Max Number of Namespaces: 1024 00:27:00.725 Max Number of I/O Queues: 128 00:27:00.725 NVMe Specification Version (VS): 1.3 00:27:00.725 NVMe Specification Version (Identify): 1.3 00:27:00.725 Maximum Queue Entries: 1024 00:27:00.725 Contiguous Queues Required: No 00:27:00.725 Arbitration Mechanisms Supported 00:27:00.725 Weighted Round Robin: Not Supported 00:27:00.725 Vendor Specific: Not Supported 00:27:00.725 Reset Timeout: 7500 ms 00:27:00.725 Doorbell Stride: 4 bytes 00:27:00.725 NVM Subsystem Reset: Not Supported 00:27:00.725 Command Sets Supported 00:27:00.725 NVM Command Set: Supported 00:27:00.725 Boot Partition: Not Supported 00:27:00.725 Memory Page Size Minimum: 4096 bytes 00:27:00.725 Memory Page Size Maximum: 4096 bytes 00:27:00.725 
Persistent Memory Region: Not Supported 00:27:00.725 Optional Asynchronous Events Supported 00:27:00.725 Namespace Attribute Notices: Supported 00:27:00.725 Firmware Activation Notices: Not Supported 00:27:00.725 ANA Change Notices: Supported 00:27:00.725 PLE Aggregate Log Change Notices: Not Supported 00:27:00.725 LBA Status Info Alert Notices: Not Supported 00:27:00.725 EGE Aggregate Log Change Notices: Not Supported 00:27:00.725 Normal NVM Subsystem Shutdown event: Not Supported 00:27:00.725 Zone Descriptor Change Notices: Not Supported 00:27:00.725 Discovery Log Change Notices: Not Supported 00:27:00.725 Controller Attributes 00:27:00.725 128-bit Host Identifier: Supported 00:27:00.725 Non-Operational Permissive Mode: Not Supported 00:27:00.725 NVM Sets: Not Supported 00:27:00.725 Read Recovery Levels: Not Supported 00:27:00.725 Endurance Groups: Not Supported 00:27:00.725 Predictable Latency Mode: Not Supported 00:27:00.725 Traffic Based Keep ALive: Supported 00:27:00.725 Namespace Granularity: Not Supported 00:27:00.725 SQ Associations: Not Supported 00:27:00.725 UUID List: Not Supported 00:27:00.725 Multi-Domain Subsystem: Not Supported 00:27:00.725 Fixed Capacity Management: Not Supported 00:27:00.725 Variable Capacity Management: Not Supported 00:27:00.725 Delete Endurance Group: Not Supported 00:27:00.725 Delete NVM Set: Not Supported 00:27:00.725 Extended LBA Formats Supported: Not Supported 00:27:00.725 Flexible Data Placement Supported: Not Supported 00:27:00.725 00:27:00.725 Controller Memory Buffer Support 00:27:00.725 ================================ 00:27:00.725 Supported: No 00:27:00.725 00:27:00.725 Persistent Memory Region Support 00:27:00.725 ================================ 00:27:00.725 Supported: No 00:27:00.725 00:27:00.725 Admin Command Set Attributes 00:27:00.725 ============================ 00:27:00.725 Security Send/Receive: Not Supported 00:27:00.725 Format NVM: Not Supported 00:27:00.725 Firmware Activate/Download: Not Supported 00:27:00.725 Namespace Management: Not Supported 00:27:00.725 Device Self-Test: Not Supported 00:27:00.725 Directives: Not Supported 00:27:00.725 NVMe-MI: Not Supported 00:27:00.725 Virtualization Management: Not Supported 00:27:00.725 Doorbell Buffer Config: Not Supported 00:27:00.725 Get LBA Status Capability: Not Supported 00:27:00.725 Command & Feature Lockdown Capability: Not Supported 00:27:00.725 Abort Command Limit: 4 00:27:00.725 Async Event Request Limit: 4 00:27:00.725 Number of Firmware Slots: N/A 00:27:00.725 Firmware Slot 1 Read-Only: N/A 00:27:00.725 Firmware Activation Without Reset: N/A 00:27:00.725 Multiple Update Detection Support: N/A 00:27:00.725 Firmware Update Granularity: No Information Provided 00:27:00.726 Per-Namespace SMART Log: Yes 00:27:00.726 Asymmetric Namespace Access Log Page: Supported 00:27:00.726 ANA Transition Time : 10 sec 00:27:00.726 00:27:00.726 Asymmetric Namespace Access Capabilities 00:27:00.726 ANA Optimized State : Supported 00:27:00.726 ANA Non-Optimized State : Supported 00:27:00.726 ANA Inaccessible State : Supported 00:27:00.726 ANA Persistent Loss State : Supported 00:27:00.726 ANA Change State : Supported 00:27:00.726 ANAGRPID is not changed : No 00:27:00.726 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:00.726 00:27:00.726 ANA Group Identifier Maximum : 128 00:27:00.726 Number of ANA Group Identifiers : 128 00:27:00.726 Max Number of Allowed Namespaces : 1024 00:27:00.726 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:00.726 Command Effects Log Page: Supported 
00:27:00.726 Get Log Page Extended Data: Supported 00:27:00.726 Telemetry Log Pages: Not Supported 00:27:00.726 Persistent Event Log Pages: Not Supported 00:27:00.726 Supported Log Pages Log Page: May Support 00:27:00.726 Commands Supported & Effects Log Page: Not Supported 00:27:00.726 Feature Identifiers & Effects Log Page:May Support 00:27:00.726 NVMe-MI Commands & Effects Log Page: May Support 00:27:00.726 Data Area 4 for Telemetry Log: Not Supported 00:27:00.726 Error Log Page Entries Supported: 128 00:27:00.726 Keep Alive: Supported 00:27:00.726 Keep Alive Granularity: 1000 ms 00:27:00.726 00:27:00.726 NVM Command Set Attributes 00:27:00.726 ========================== 00:27:00.726 Submission Queue Entry Size 00:27:00.726 Max: 64 00:27:00.726 Min: 64 00:27:00.726 Completion Queue Entry Size 00:27:00.726 Max: 16 00:27:00.726 Min: 16 00:27:00.726 Number of Namespaces: 1024 00:27:00.726 Compare Command: Not Supported 00:27:00.726 Write Uncorrectable Command: Not Supported 00:27:00.726 Dataset Management Command: Supported 00:27:00.726 Write Zeroes Command: Supported 00:27:00.726 Set Features Save Field: Not Supported 00:27:00.726 Reservations: Not Supported 00:27:00.726 Timestamp: Not Supported 00:27:00.726 Copy: Not Supported 00:27:00.726 Volatile Write Cache: Present 00:27:00.726 Atomic Write Unit (Normal): 1 00:27:00.726 Atomic Write Unit (PFail): 1 00:27:00.726 Atomic Compare & Write Unit: 1 00:27:00.726 Fused Compare & Write: Not Supported 00:27:00.726 Scatter-Gather List 00:27:00.726 SGL Command Set: Supported 00:27:00.726 SGL Keyed: Not Supported 00:27:00.726 SGL Bit Bucket Descriptor: Not Supported 00:27:00.726 SGL Metadata Pointer: Not Supported 00:27:00.726 Oversized SGL: Not Supported 00:27:00.726 SGL Metadata Address: Not Supported 00:27:00.726 SGL Offset: Supported 00:27:00.726 Transport SGL Data Block: Not Supported 00:27:00.726 Replay Protected Memory Block: Not Supported 00:27:00.726 00:27:00.726 Firmware Slot Information 00:27:00.726 ========================= 00:27:00.726 Active slot: 0 00:27:00.726 00:27:00.726 Asymmetric Namespace Access 00:27:00.726 =========================== 00:27:00.726 Change Count : 0 00:27:00.726 Number of ANA Group Descriptors : 1 00:27:00.726 ANA Group Descriptor : 0 00:27:00.726 ANA Group ID : 1 00:27:00.726 Number of NSID Values : 1 00:27:00.726 Change Count : 0 00:27:00.726 ANA State : 1 00:27:00.726 Namespace Identifier : 1 00:27:00.726 00:27:00.726 Commands Supported and Effects 00:27:00.726 ============================== 00:27:00.726 Admin Commands 00:27:00.726 -------------- 00:27:00.726 Get Log Page (02h): Supported 00:27:00.726 Identify (06h): Supported 00:27:00.726 Abort (08h): Supported 00:27:00.726 Set Features (09h): Supported 00:27:00.726 Get Features (0Ah): Supported 00:27:00.726 Asynchronous Event Request (0Ch): Supported 00:27:00.726 Keep Alive (18h): Supported 00:27:00.726 I/O Commands 00:27:00.726 ------------ 00:27:00.726 Flush (00h): Supported 00:27:00.726 Write (01h): Supported LBA-Change 00:27:00.726 Read (02h): Supported 00:27:00.726 Write Zeroes (08h): Supported LBA-Change 00:27:00.726 Dataset Management (09h): Supported 00:27:00.726 00:27:00.726 Error Log 00:27:00.726 ========= 00:27:00.726 Entry: 0 00:27:00.726 Error Count: 0x3 00:27:00.726 Submission Queue Id: 0x0 00:27:00.726 Command Id: 0x5 00:27:00.726 Phase Bit: 0 00:27:00.726 Status Code: 0x2 00:27:00.726 Status Code Type: 0x0 00:27:00.726 Do Not Retry: 1 00:27:00.726 Error Location: 0x28 00:27:00.726 LBA: 0x0 00:27:00.726 Namespace: 0x0 00:27:00.726 Vendor Log 
Page: 0x0 00:27:00.726 ----------- 00:27:00.726 Entry: 1 00:27:00.726 Error Count: 0x2 00:27:00.726 Submission Queue Id: 0x0 00:27:00.726 Command Id: 0x5 00:27:00.726 Phase Bit: 0 00:27:00.726 Status Code: 0x2 00:27:00.726 Status Code Type: 0x0 00:27:00.726 Do Not Retry: 1 00:27:00.726 Error Location: 0x28 00:27:00.726 LBA: 0x0 00:27:00.726 Namespace: 0x0 00:27:00.726 Vendor Log Page: 0x0 00:27:00.726 ----------- 00:27:00.726 Entry: 2 00:27:00.726 Error Count: 0x1 00:27:00.726 Submission Queue Id: 0x0 00:27:00.726 Command Id: 0x4 00:27:00.726 Phase Bit: 0 00:27:00.726 Status Code: 0x2 00:27:00.726 Status Code Type: 0x0 00:27:00.726 Do Not Retry: 1 00:27:00.726 Error Location: 0x28 00:27:00.726 LBA: 0x0 00:27:00.726 Namespace: 0x0 00:27:00.726 Vendor Log Page: 0x0 00:27:00.726 00:27:00.726 Number of Queues 00:27:00.726 ================ 00:27:00.726 Number of I/O Submission Queues: 128 00:27:00.726 Number of I/O Completion Queues: 128 00:27:00.726 00:27:00.726 ZNS Specific Controller Data 00:27:00.726 ============================ 00:27:00.726 Zone Append Size Limit: 0 00:27:00.726 00:27:00.726 00:27:00.726 Active Namespaces 00:27:00.726 ================= 00:27:00.726 get_feature(0x05) failed 00:27:00.726 Namespace ID:1 00:27:00.726 Command Set Identifier: NVM (00h) 00:27:00.726 Deallocate: Supported 00:27:00.726 Deallocated/Unwritten Error: Not Supported 00:27:00.726 Deallocated Read Value: Unknown 00:27:00.726 Deallocate in Write Zeroes: Not Supported 00:27:00.726 Deallocated Guard Field: 0xFFFF 00:27:00.726 Flush: Supported 00:27:00.726 Reservation: Not Supported 00:27:00.726 Namespace Sharing Capabilities: Multiple Controllers 00:27:00.726 Size (in LBAs): 1953525168 (931GiB) 00:27:00.726 Capacity (in LBAs): 1953525168 (931GiB) 00:27:00.726 Utilization (in LBAs): 1953525168 (931GiB) 00:27:00.726 UUID: 26e09966-d4a4-434f-95cd-385c5256ee45 00:27:00.726 Thin Provisioning: Not Supported 00:27:00.726 Per-NS Atomic Units: Yes 00:27:00.726 Atomic Boundary Size (Normal): 0 00:27:00.726 Atomic Boundary Size (PFail): 0 00:27:00.726 Atomic Boundary Offset: 0 00:27:00.726 NGUID/EUI64 Never Reused: No 00:27:00.726 ANA group ID: 1 00:27:00.726 Namespace Write Protected: No 00:27:00.726 Number of LBA Formats: 1 00:27:00.726 Current LBA Format: LBA Format #00 00:27:00.726 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:00.726 00:27:00.726 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:00.726 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:00.726 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:00.726 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:00.726 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:00.726 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:00.726 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:00.726 rmmod nvme_tcp 00:27:00.726 rmmod nvme_fabrics 00:27:00.726 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:00.726 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:00.726 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:00.726 11:15:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:00.726 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:00.726 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:00.726 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:00.726 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:00.727 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:00.727 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.727 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.727 11:15:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.272 11:15:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:03.272 11:15:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:03.272 11:15:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:03.272 11:15:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:03.272 11:15:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:03.272 11:15:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:03.272 11:15:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:03.272 11:15:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:03.272 11:15:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:03.272 11:15:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:03.272 11:15:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:05.176 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:05.176 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:05.176 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:05.176 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:05.176 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:05.176 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:05.176 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:05.176 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:05.176 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:05.176 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:05.176 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:05.176 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:05.176 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:05.176 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:05.176 0000:80:04.1 (8086 2021): ioatdma -> 
vfio-pci 00:27:05.176 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:06.114 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:06.114 00:27:06.114 real 0m14.439s 00:27:06.114 user 0m3.334s 00:27:06.114 sys 0m7.221s 00:27:06.114 11:15:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:06.114 11:15:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:06.114 ************************************ 00:27:06.114 END TEST nvmf_identify_kernel_target 00:27:06.114 ************************************ 00:27:06.114 11:15:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:06.114 11:15:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:06.114 11:15:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:06.114 11:15:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.374 ************************************ 00:27:06.374 START TEST nvmf_auth_host 00:27:06.374 ************************************ 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:06.374 * Looking for test storage... 00:27:06.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:06.374 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:06.375 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:06.375 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:06.375 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:06.375 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.375 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:06.375 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:06.375 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:06.375 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.375 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.375 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.375 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:06.375 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:06.375 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:06.375 11:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:11.664 11:15:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:11.664 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
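The device scan above builds per-family PCI ID lists (e810, x722, mlx) and, for each function it keeps, resolves the kernel network interface through sysfs; the "Found net devices under ..." lines that follow are the result of that lookup. The lookup itself is just a glob over the device's net/ directory, roughly:

# Example: list the net interfaces bound to one PCI function.
# 0000:86:00.0 is the address reported in the trace; substitute your own.
pci=0000:86:00.0
for netdir in /sys/bus/pci/devices/$pci/net/*; do
    [[ -e $netdir ]] || continue              # skip if no network driver is bound
    dev=${netdir##*/}
    echo "$pci -> $dev ($(cat /sys/class/net/$dev/operstate))"
done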
00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:11.664 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.664 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:11.665 Found net devices under 0000:86:00.0: cvl_0_0 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:11.665 Found net devices under 0000:86:00.1: cvl_0_1 00:27:11.665 11:15:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:11.665 11:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:11.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:11.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:27:11.665 00:27:11.665 --- 10.0.0.2 ping statistics --- 00:27:11.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.665 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:11.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:11.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.465 ms 00:27:11.665 00:27:11.665 --- 10.0.0.1 ping statistics --- 00:27:11.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.665 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1583727 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1583727 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1583727 ']' 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
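The nvmf_tcp_init sequence above splits the two NIC ports into an initiator side and a target side by moving one interface into a private network namespace, assigns 10.0.0.1/24 and 10.0.0.2/24, and pings in both directions before nvmf_tgt is started inside that namespace. Reduced to its essentials (the interface names and addresses are the ones this run happened to use):

# Condensed from the nvmf_tcp_init steps in the trace.
ns=cvl_0_0_ns_spdk          # namespace that will own the target-side port
tgt_if=cvl_0_0              # target-facing interface (moved into the namespace)
ini_if=cvl_0_1              # initiator-facing interface (stays in the root ns)

ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$ini_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"

ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up

# Make sure the firewall does not block NVMe/TCP (port 4420) on the initiator-side interface.
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                       # root namespace -> target side
ip netns exec "$ns" ping -c 1 10.0.0.1   # target namespace -> initiator side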
00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:11.665 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.604 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:12.604 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:12.604 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:12.604 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:12.604 11:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=934e0fe3b0d2a994d3dfaa430449ccde 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kO8 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 934e0fe3b0d2a994d3dfaa430449ccde 0 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 934e0fe3b0d2a994d3dfaa430449ccde 0 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=934e0fe3b0d2a994d3dfaa430449ccde 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kO8 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kO8 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kO8 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:12.604 11:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=39e6725e3859a1e4c5776153362f7e3a95a240fbc369b093acf4855cb20a3e9a 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XZr 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 39e6725e3859a1e4c5776153362f7e3a95a240fbc369b093acf4855cb20a3e9a 3 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 39e6725e3859a1e4c5776153362f7e3a95a240fbc369b093acf4855cb20a3e9a 3 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=39e6725e3859a1e4c5776153362f7e3a95a240fbc369b093acf4855cb20a3e9a 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:12.604 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:12.865 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XZr 00:27:12.865 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XZr 00:27:12.865 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.XZr 00:27:12.865 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:12.865 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:12.865 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.865 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:12.865 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:12.865 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:12.865 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:12.865 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=41826f65d57b641bd040fcb1a86ad091df67a7e18b622e0c 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.cVr 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 41826f65d57b641bd040fcb1a86ad091df67a7e18b622e0c 0 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 41826f65d57b641bd040fcb1a86ad091df67a7e18b622e0c 0 
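Each secret above is produced by gen_dhchap_key: xxd reads len/2 random bytes from /dev/urandom, a short Python helper (its body is not visible in xtrace) wraps the hex string in the DHHC-1 interchange format, and the resulting file is chmod 0600'd. A rough equivalent is sketched below; the encoding used here (base64 of the key bytes plus a little-endian CRC32, with the digest index 0=null, 1=sha256, 2=sha384, 3=sha512 in the middle field) is an assumption about that helper, not something shown in the trace:

# Hedged sketch; the exact helper used by the test is elided by xtrace.
gen_dhchap_key_sketch() {
    # digest index and key length in hex characters, e.g. "0 32" or "3 64"
    local digest_idx=$1 hex_len=$2
    local key
    key=$(xxd -p -c0 -l $((hex_len / 2)) /dev/urandom)
    python3 - "$key" "$digest_idx" <<'PY'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])                     # 0=null, 1=sha256, 2=sha384, 3=sha512
crc = struct.pack('<I', binascii.crc32(key))  # assumed: CRC32 of the key, little-endian
print('DHHC-1:{:02x}:{}:'.format(digest, base64.b64encode(key + crc).decode()))
PY
}

# Roughly what the trace does for keys[0] (null digest, 32 hex characters):
keyfile=$(mktemp -t spdk.key-null.XXX)
gen_dhchap_key_sketch 0 32 > "$keyfile"
chmod 0600 "$keyfile"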
00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=41826f65d57b641bd040fcb1a86ad091df67a7e18b622e0c 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.cVr 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.cVr 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.cVr 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9cba50dce63955a0e4598ea667f0c3ddc8d8a7979c17f829 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.E9s 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9cba50dce63955a0e4598ea667f0c3ddc8d8a7979c17f829 2 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9cba50dce63955a0e4598ea667f0c3ddc8d8a7979c17f829 2 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9cba50dce63955a0e4598ea667f0c3ddc8d8a7979c17f829 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.E9s 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.E9s 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.E9s 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.866 11:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4f2e4b48e792b60591abcfdf0ae8449a 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8w4 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4f2e4b48e792b60591abcfdf0ae8449a 1 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4f2e4b48e792b60591abcfdf0ae8449a 1 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4f2e4b48e792b60591abcfdf0ae8449a 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8w4 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8w4 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.8w4 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=87f8ab1cade165ffddc78bf00b2c9260 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.oEd 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 87f8ab1cade165ffddc78bf00b2c9260 1 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 87f8ab1cade165ffddc78bf00b2c9260 1 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=87f8ab1cade165ffddc78bf00b2c9260 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:12.866 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:13.126 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.oEd 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.oEd 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.oEd 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8b6ef37dc3a781c1bdba5962d58e3461aa922171fa650880 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Ua0 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8b6ef37dc3a781c1bdba5962d58e3461aa922171fa650880 2 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8b6ef37dc3a781c1bdba5962d58e3461aa922171fa650880 2 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8b6ef37dc3a781c1bdba5962d58e3461aa922171fa650880 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Ua0 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Ua0 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Ua0 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:13.127 11:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=592d62276106052b84e3142f1b792271 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fwJ 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 592d62276106052b84e3142f1b792271 0 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 592d62276106052b84e3142f1b792271 0 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=592d62276106052b84e3142f1b792271 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fwJ 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fwJ 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fwJ 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3017015c0c3164b6a2a44b1ee9de56aaae0159b9cd3ce3edc092a9ea7e199b8c 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.mug 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3017015c0c3164b6a2a44b1ee9de56aaae0159b9cd3ce3edc092a9ea7e199b8c 3 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3017015c0c3164b6a2a44b1ee9de56aaae0159b9cd3ce3edc092a9ea7e199b8c 3 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3017015c0c3164b6a2a44b1ee9de56aaae0159b9cd3ce3edc092a9ea7e199b8c 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.mug 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.mug 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.mug 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1583727 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1583727 ']' 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:13.127 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kO8 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.XZr ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XZr 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.cVr 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.E9s ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.E9s 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.8w4 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.oEd ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oEd 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Ua0 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fwJ ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fwJ 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.mug 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.388 11:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:13.388 11:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:15.931 Waiting for block devices as requested 00:27:15.931 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:27:16.191 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:16.191 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:16.191 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:16.191 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:16.451 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:16.451 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:16.451 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:16.451 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:16.711 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:16.711 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:16.711 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:16.711 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:16.971 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:16.971 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:16.971 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:17.231 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:17.802 No valid GPT data, bailing 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:17.802 11:15:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:27:17.802 00:27:17.802 Discovery Log Number of Records 2, Generation counter 2 00:27:17.802 =====Discovery Log Entry 0====== 00:27:17.802 trtype: tcp 00:27:17.802 adrfam: ipv4 00:27:17.802 subtype: current discovery subsystem 00:27:17.802 treq: not specified, sq flow control disable supported 00:27:17.802 portid: 1 00:27:17.802 trsvcid: 4420 00:27:17.802 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:17.802 traddr: 10.0.0.1 00:27:17.802 eflags: none 00:27:17.802 sectype: none 00:27:17.802 =====Discovery Log Entry 1====== 00:27:17.802 trtype: tcp 00:27:17.802 adrfam: ipv4 00:27:17.802 subtype: nvme subsystem 00:27:17.802 treq: not specified, sq flow control disable supported 00:27:17.802 portid: 1 00:27:17.802 trsvcid: 4420 00:27:17.802 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:17.802 traddr: 10.0.0.1 00:27:17.802 eflags: none 00:27:17.802 sectype: none 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:17.802 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
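The configure_kernel_target portion of the trace (nvmf/common.sh@658-677) builds the kernel nvmet target through configfs, and host/auth.sh@36-38 then pins access to a single host NQN before nvmet_auth_set_key loads the DH-HMAC-CHAP material. Because xtrace hides the redirection targets of the echo lines, the attribute file names below are taken from the stock kernel nvmet configfs layout and matched to the echoes by position; they are an assumed reconstruction, not read from the script, and $key/$ckey stand for the DHHC-1 strings echoed in the trace.

# Assumed reconstruction of the traced kernel target setup.
nqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
mkdir "$sub" "$sub/namespaces/1" "$port"
echo "SPDK-$nqn"  > "$sub/attr_model"              # presumed target of "echo SPDK-nqn..."
echo 1            > "$sub/attr_allow_any_host"     # first "echo 1"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"     # second "echo 1"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                   # expose the subsystem on the port

# host/auth.sh@36-38: create the host entry, drop allow_any_host, allow-list it.
mkdir "/sys/kernel/config/nvmet/hosts/$hostnqn"
echo 0 > "$sub/attr_allow_any_host"
ln -s "/sys/kernel/config/nvmet/hosts/$hostnqn" "$sub/allowed_hosts/"

# nvmet_auth_set_key (host/auth.sh@42-51) then appears to fill the host's
# DH-HMAC-CHAP attributes with the digest, DH group and DHHC-1 secrets:
echo 'hmac(sha256)' > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_hash"
echo ffdhe2048      > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_dhgroup"
echo "$key"         > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key"       # e.g. the DHHC-1:00:... host secret
echo "$ckey"        > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_ctrl_key"  # the DHHC-1:02:... controller secret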
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.803 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.063 nvme0n1 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
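On the initiator side, the rpc_cmd calls above run against the SPDK target that was started earlier and listens on /var/tmp/spdk.sock. Collected into one place, and assuming rpc_cmd simply forwards its arguments to scripts/rpc.py (the sub-commands and flags themselves are copied from the trace), the per-key exercise looks roughly like this:

rpc=./scripts/rpc.py   # assumed wrapper target of rpc_cmd; defaults to /var/tmp/spdk.sock

# Register the generated key files in the SPDK keyring under the names the
# attach call refers to:
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.cVr     # host secret
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.E9s   # controller (bidirectional) secret

# Limit the initiator to one digest/DH-group pair, then attach with in-band
# DH-HMAC-CHAP authentication using those keyring names:
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

$rpc bdev_nvme_get_controllers            # the trace checks that nvme0 shows up
$rpc bdev_nvme_detach_controller nvme0    # detached before the next combination

The remainder of the trace repeats exactly this pattern for each digest, DH group and key index configured at the top of the run.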
00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.063 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.064 nvme0n1 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.064 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.325 11:15:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.325 nvme0n1 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.325 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:18.585 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:18.585 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:18.585 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:18.585 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.585 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.585 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.585 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.585 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.585 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:18.585 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.585 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.585 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.586 nvme0n1 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.586 11:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.586 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.847 nvme0n1 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.847 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.108 nvme0n1 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.108 11:15:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.108 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.109 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.369 nvme0n1 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.369 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.370 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.370 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.370 
11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.370 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.370 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.370 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.370 nvme0n1 00:27:19.370 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.370 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.370 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.370 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.370 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.631 11:15:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.631 11:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.631 nvme0n1 00:27:19.631 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.631 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.631 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.631 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.631 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.631 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.891 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.892 11:15:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.892 nvme0n1 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.892 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:20.154 11:15:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.154 nvme0n1 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:20.154 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.155 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.155 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:20.155 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:20.155 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:20.155 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:20.155 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.155 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.155 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.155 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.155 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.155 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:20.155 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.155 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.155 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.416 nvme0n1 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:20.416 11:15:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.416 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.676 11:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.676 nvme0n1 00:27:20.676 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:20.677 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.677 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.677 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.677 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.677 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.937 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.938 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.938 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.938 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.938 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.938 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.938 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.938 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.198 nvme0n1 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.198 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.458 nvme0n1 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.458 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.459 11:15:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.459 11:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.720 nvme0n1 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.720 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.982 nvme0n1 00:27:21.982 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 
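
[editor's sketch] The trace above, and the near-identical blocks that follow for the other FFDHE groups and key indices, all replay the same host/auth.sh round: program a DH-HMAC-CHAP key (plus an optional controller key) into the kernel nvmet target, restrict the SPDK host to one digest/dhgroup pair, attach a controller to 10.0.0.1:4420, confirm that bdev_nvme_get_controllers reports nvme0, and detach again before the next combination. The condensed sketch below only reuses commands visible in this trace; it assumes the helper functions rpc_cmd and nvmet_auth_set_key are sourced from the SPDK test tree (host/auth.sh and the common scripts) and that keys named key0/ckey0 were registered by the earlier, untraced part of the run.

#!/usr/bin/env bash
# Sketch of one authentication round from host/auth.sh, reconstructed from this trace.
# Assumes the SPDK test helpers (rpc_cmd, nvmet_auth_set_key) are already sourced and
# that keys "key0"/"ckey0" exist; these are test-harness functions, not standalone CLIs.

digest=sha256          # sha384 is exercised later in this log
dhgroup=ffdhe6144      # ffdhe2048/4096/8192 are exercised in the surrounding blocks
keyid=0                # key indices 0-4 are cycled; keyid 4 has no controller key

# 1. Program the kernel nvmet target with the key pair for this round.
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# 2. Restrict the SPDK host to the digest/dhgroup under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 3. Attach a controller over TCP; DH-HMAC-CHAP runs during the connect.
#    The --dhchap-ctrlr-key argument is only passed when the test defines a ckey
#    for this index (keyid 4 is attached with --dhchap-key alone in this run).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# 4. The round passes if the controller shows up; it is then torn down so the
#    next digest/dhgroup/key combination is negotiated from a fresh connect.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0
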
00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.243 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.504 nvme0n1 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.504 11:15:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:22.504 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.505 11:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.076 nvme0n1 00:27:23.076 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.076 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.076 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.076 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.076 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.076 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.076 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.076 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.076 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.076 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.076 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.076 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.076 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.077 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.338 nvme0n1 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.338 11:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.910 nvme0n1 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.910 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:24.478 nvme0n1 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:24.478 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.479 11:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.049 nvme0n1 00:27:25.049 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.049 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.049 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.049 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.049 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.049 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.049 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.049 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:25.050 
11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.050 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.620 nvme0n1 00:27:25.620 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.620 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.620 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.620 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.620 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.620 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.620 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.620 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.620 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.620 11:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.620 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.620 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.620 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:25.620 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.620 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.620 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.620 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:25.620 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.621 
11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.621 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.225 nvme0n1 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.225 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.226 11:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.798 nvme0n1 00:27:26.798 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.798 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.798 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.798 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.798 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.798 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.798 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.798 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.798 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.798 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.798 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.799 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.059 nvme0n1 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:27.059 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.060 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.320 nvme0n1 00:27:27.320 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.320 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.320 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.320 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.320 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.320 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.320 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:27.321 11:15:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.321 nvme0n1 00:27:27.321 11:15:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.321 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.581 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.582 nvme0n1 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.582 11:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.582 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.843 nvme0n1 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.843 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.104 nvme0n1 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.104 
11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.104 11:15:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.104 nvme0n1 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.104 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.364 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.364 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.364 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.364 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.364 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.364 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.364 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.365 nvme0n1 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.365 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.625 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.626 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.626 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.626 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:28.626 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.626 11:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.626 nvme0n1 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.626 
11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.626 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.887 nvme0n1 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.887 
11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.887 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.148 nvme0n1 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.148 11:15:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.148 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.409 nvme0n1 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.409 11:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.670 nvme0n1 00:27:29.670 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.670 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.670 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.670 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.670 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.670 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.670 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.670 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.670 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.670 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:29.930 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.931 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.931 nvme0n1 00:27:29.931 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.931 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.931 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.931 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.931 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.931 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.191 11:15:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.191 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.452 nvme0n1 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.452 11:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.713 nvme0n1 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.713 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.283 nvme0n1 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.283 11:15:50 
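[editor's note] The DHHC-1 strings threaded through this trace are the configured secrets in the NVMe in-band-authentication interchange format, "DHHC-1:<hash id>:<base64>:". As I understand the format, the two-digit field selects an optional hash transform of the secret (00 = used as-is, 01/02/03 = SHA-256/384/512) and the base64 blob carries the raw secret followed by a 4-byte trailer, believed to be a CRC-32 of the secret; that reading is an assumption, not something this log states. A quick way to inspect one of the keys copied from the trace:

  # Decode the base64 payload of a DHHC-1 secret and print its size in bytes.
  key='DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==:'
  blob=$(cut -d: -f3 <<< "$key")                    # third ':'-separated field is the base64 payload
  base64 -d <<< "$blob" | wc -c                     # prints 52 here: a 48-byte secret plus the 4-byte trailer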
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.283 11:15:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.283 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.284 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.284 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.284 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.284 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.284 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.284 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.284 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.284 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.284 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.284 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.544 nvme0n1 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.544 11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.544 
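[editor's note] On the target side, each iteration first runs nvmet_auth_set_key (host/auth.sh@42-51 above): it picks the digest, DH group and key pair for the given key index and echoes them to destinations that xtrace does not show, because shell tracing drops redirections. The sketch below is a plausible reconstruction assuming the usual kernel nvmet configfs layout for per-host DH-HMAC-CHAP attributes; the directory and attribute names are assumptions, not visible in this log, while $key/$ckey stand for the DHHC-1 values assigned at host/auth.sh@45-46.

  # Assumed configfs destinations for the echoes traced at host/auth.sh@48-51.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0     # assumed host entry
  echo 'hmac(sha384)' > "$host_dir/dhchap_hash"                         # host/auth.sh@48
  echo ffdhe6144 > "$host_dir/dhchap_dhgroup"                           # host/auth.sh@49
  echo "$key" > "$host_dir/dhchap_key"                                  # host/auth.sh@50
  [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"          # host/auth.sh@51, only when a controller key is set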
11:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.115 nvme0n1 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.115 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.116 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.376 nvme0n1 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.376 11:15:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.376 11:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.946 nvme0n1 00:27:32.946 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.946 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.946 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.946 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.946 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.946 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.946 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.946 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.946 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.946 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.207 11:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.777 nvme0n1 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:33.777 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.778 
11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.778 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.346 nvme0n1 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.346 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.347 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.347 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.347 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.347 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.347 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.347 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.347 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.347 11:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.917 nvme0n1 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.917 11:15:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.917 11:15:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.917 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.491 nvme0n1 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:35.491 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.492 11:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:35.752 nvme0n1 00:27:35.752 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.752 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.752 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.752 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.752 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.752 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.752 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.753 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.014 nvme0n1 00:27:36.014 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.014 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.014 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.014 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.014 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.014 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.014 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.014 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.014 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.014 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.014 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:36.015 
11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.015 nvme0n1 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.015 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.277 
11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.277 nvme0n1 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.277 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.278 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.278 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.278 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.278 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.278 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.278 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.539 nvme0n1 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.539 11:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.800 nvme0n1 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.800 
11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.800 11:15:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.800 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.062 nvme0n1 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:37.062 11:15:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.062 nvme0n1 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.062 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.324 11:15:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.324 nvme0n1 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:37.324 
11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:37.324 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.325 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
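
For reference, each pass of the loop traced above reduces to the host-side RPC sequence below. This is a minimal sketch assembled only from the rpc_cmd calls visible in this log (shown for the sha512 / ffdhe3072 / keyid=4 pass that just completed); rpc_cmd is the wrapper the autotest scripts use throughout this run, and the matching target-side key is assumed to have already been installed by nvmet_auth_set_key earlier in host/auth.sh.

  # configure the host to negotiate DH-HMAC-CHAP with the digest/dhgroup under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # connect to the target with the key for this iteration (keyid=4 has no controller key)
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
  # verify the authenticated controller came up, then tear it down for the next pass
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

Keyids that also define a controller key additionally pass --dhchap-ctrlr-key ckey<N> on the attach, as the keyid=3 pass earlier in this log does.
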
00:27:37.586 nvme0n1 00:27:37.586 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.586 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.586 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.586 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.586 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.586 11:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.586 11:15:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.586 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.847 nvme0n1 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.847 11:15:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.847 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.848 11:15:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.848 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.109 nvme0n1 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.109 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.370 nvme0n1 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.370 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.632 11:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.894 nvme0n1 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.894 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.155 nvme0n1 00:27:39.155 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.155 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.155 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.155 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.155 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.155 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.155 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.155 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.155 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.155 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.156 11:15:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.156 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.417 nvme0n1 00:27:39.417 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.417 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.417 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.417 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.417 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.417 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.417 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.417 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.417 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.417 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.677 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:39.678 11:15:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.678 11:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.941 nvme0n1 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.941 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.513 nvme0n1 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.513 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.514 11:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.775 nvme0n1 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.775 11:16:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.775 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.346 nvme0n1 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0ZTBmZTNiMGQyYTk5NGQzZGZhYTQzMDQ0OWNjZGUYGqtY: 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: ]] 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzllNjcyNWUzODU5YTFlNGM1Nzc2MTUzMzYyZjdlM2E5NWEyNDBmYmMzNjliMDkzYWNmNDg1NWNiMjBhM2U5YVVf7H8=: 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.346 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.347 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.347 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.347 11:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.932 nvme0n1 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.932 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.501 nvme0n1 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.501 11:16:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGYyZTRiNDhlNzkyYjYwNTkxYWJjZmRmMGFlODQ0OWFxUEcB: 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: ]] 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODdmOGFiMWNhZGUxNjVmZmRkYzc4YmYwMGIyYzkyNjCpWZHn: 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.501 11:16:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.501 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.502 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.502 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.502 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.502 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.502 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.502 11:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.072 nvme0n1 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGI2ZWYzN2RjM2E3ODFjMWJkYmE1OTYyZDU4ZTM0NjFhYTkyMjE3MWZhNjUwODgwL6EEtQ==: 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: ]] 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTkyZDYyMjc2MTA2MDUyYjg0ZTMxNDJmMWI3OTIyNzFneObN: 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.072 11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.072 
11:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.642 nvme0n1 00:27:43.642 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.642 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.642 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.642 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.642 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.642 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.642 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.642 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.642 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.642 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.642 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzAxNzAxNWMwYzMxNjRiNmEyYTQ0YjFlZTlkZTU2YWFhZTAxNTliOWNkM2NlM2VkYzA5MmE5ZWE3ZTE5OWI4Y46t3gU=: 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.643 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.214 nvme0n1 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
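Each pass of the loop traced above follows the same positive-path pattern: program the key for the current digest/DH-group/keyid into the kernel nvmet target, restrict the SPDK host to that digest and DH group, attach with the matching DH-HMAC-CHAP key (and controller key, where one is defined), confirm the controller appears, then detach before the next combination. A condensed sketch, assuming the rpc_cmd and nvmet_auth_set_key helpers and the key/ckey names set up earlier in auth.sh:

# One (digest, dhgroup, keyid) pass of connect_authenticate -- sketch only.
nvmet_auth_set_key sha512 ffdhe8192 1                                    # key on the kernel target
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1                       # bidirectional auth
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller came up
rpc_cmd bdev_nvme_detach_controller nvme0                                # clean up for the next pass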
-- # keyid=1 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDE4MjZmNjVkNTdiNjQxYmQwNDBmY2IxYTg2YWQwOTFkZjY3YTdlMThiNjIyZTBjKjJcbQ==: 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: ]] 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWNiYTUwZGNlNjM5NTVhMGU0NTk4ZWE2NjdmMGMzZGRjOGQ4YTc5NzljMTdmODI57QX/Bg==: 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.214 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.475 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.476 request: 00:27:44.476 { 00:27:44.476 "name": "nvme0", 00:27:44.476 "trtype": "tcp", 00:27:44.476 "traddr": "10.0.0.1", 00:27:44.476 "adrfam": "ipv4", 00:27:44.476 "trsvcid": "4420", 00:27:44.476 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:44.476 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:44.476 "prchk_reftag": false, 00:27:44.476 "prchk_guard": false, 00:27:44.476 "hdgst": false, 00:27:44.476 "ddgst": false, 00:27:44.476 "method": "bdev_nvme_attach_controller", 00:27:44.476 "req_id": 1 00:27:44.476 } 00:27:44.476 Got JSON-RPC error response 00:27:44.476 response: 00:27:44.476 { 00:27:44.476 "code": -5, 00:27:44.476 "message": "Input/output error" 00:27:44.476 } 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.476 11:16:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.476 request: 00:27:44.476 { 00:27:44.476 "name": "nvme0", 00:27:44.476 "trtype": "tcp", 00:27:44.476 "traddr": "10.0.0.1", 00:27:44.476 "adrfam": "ipv4", 00:27:44.476 "trsvcid": "4420", 00:27:44.476 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:44.476 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:44.476 "prchk_reftag": false, 00:27:44.476 "prchk_guard": false, 00:27:44.476 "hdgst": false, 00:27:44.476 "ddgst": false, 00:27:44.476 "dhchap_key": "key2", 00:27:44.476 "method": "bdev_nvme_attach_controller", 00:27:44.476 "req_id": 1 00:27:44.476 } 00:27:44.476 Got JSON-RPC error response 00:27:44.476 response: 00:27:44.476 { 00:27:44.476 "code": -5, 00:27:44.476 "message": "Input/output error" 00:27:44.476 } 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:44.476 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.477 request: 00:27:44.477 { 00:27:44.477 "name": "nvme0", 00:27:44.477 "trtype": "tcp", 00:27:44.477 "traddr": "10.0.0.1", 00:27:44.477 "adrfam": "ipv4", 00:27:44.477 "trsvcid": "4420", 00:27:44.477 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:44.477 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:44.477 "prchk_reftag": false, 00:27:44.477 "prchk_guard": false, 00:27:44.477 "hdgst": false, 00:27:44.477 "ddgst": false, 00:27:44.477 "dhchap_key": "key1", 00:27:44.477 "dhchap_ctrlr_key": "ckey2", 00:27:44.477 "method": "bdev_nvme_attach_controller", 00:27:44.477 "req_id": 1 00:27:44.477 } 00:27:44.477 Got JSON-RPC error response 00:27:44.477 response: 00:27:44.477 { 00:27:44.477 "code": -5, 00:27:44.477 "message": "Input/output error" 00:27:44.477 } 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
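The three NOT-wrapped attempts above cover the failure paths: attaching with no DH-HMAC-CHAP key at all, with the wrong host key (key2), and with a valid host key but a mismatched controller key (key1/ckey2). Each attach is expected to be rejected, which is what the -5 (Input/output error) JSON-RPC responses in the trace show, and bdev_nvme_get_controllers is re-checked along the way to confirm nothing was left attached. Condensed, under the same assumptions as the sketch above:

# Negative-path checks -- every attach must fail; NOT inverts the expected exit status.
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0                    # no key
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2  # wrong key
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey2                                    # mismatched ckey
(( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))                       # nothing attached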
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:44.477 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:44.477 rmmod nvme_tcp 00:27:44.477 rmmod nvme_fabrics 00:27:44.738 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:44.738 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:44.738 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:44.738 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1583727 ']' 00:27:44.738 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1583727 00:27:44.738 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1583727 ']' 00:27:44.738 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1583727 00:27:44.738 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:44.738 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:44.738 11:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1583727 00:27:44.738 11:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:44.738 11:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:44.738 11:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1583727' 00:27:44.738 killing process with pid 1583727 00:27:44.738 11:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1583727 00:27:44.738 11:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1583727 00:27:44.738 11:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:44.738 11:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:44.738 11:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:44.738 11:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:44.738 11:16:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:44.738 11:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.738 11:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.738 11:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.284 11:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:47.284 11:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:47.284 11:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:47.284 11:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:47.284 11:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:47.284 11:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:47.284 11:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:47.284 11:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:47.284 11:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:47.284 11:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:47.284 11:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:47.284 11:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:47.284 11:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:49.825 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:49.825 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:50.396 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:50.655 11:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kO8 /tmp/spdk.key-null.cVr /tmp/spdk.key-sha256.8w4 /tmp/spdk.key-sha384.Ua0 /tmp/spdk.key-sha512.mug /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:50.655 11:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
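Cleanup then undoes the kernel nvmet configuration in reverse order of its creation: the allowed-host link and host entry are removed, the port-to-subsystem link, namespace, port and subsystem directories are deleted, the nvmet modules are unloaded, and the temporary DHCHAP key files are discarded. Roughly, using the same configfs paths as this run:

# Kernel nvmet target teardown (paths as used in this run; sketch only).
rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe -r nvmet_tcp nvmet                 # unload the kernel target modules
rm -f /tmp/spdk.key-*                       # the randomly named key files listed above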
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:53.195 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:53.195 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:53.195 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:53.195 00:27:53.195 real 0m46.641s 00:27:53.195 user 0m41.266s 00:27:53.195 sys 0m11.287s 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.195 ************************************ 00:27:53.195 END TEST nvmf_auth_host 00:27:53.195 ************************************ 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.195 ************************************ 00:27:53.195 START TEST nvmf_digest 00:27:53.195 ************************************ 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:53.195 * Looking for test storage... 
00:27:53.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.195 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:53.196 
11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:27:53.196 11:16:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:58.479 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:58.479 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.479 
11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:58.479 Found net devices under 0000:86:00.0: cvl_0_0 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:58.479 Found net devices under 0000:86:00.1: cvl_0_1 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.479 11:16:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:58.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:27:58.479 00:27:58.479 --- 10.0.0.2 ping statistics --- 00:27:58.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.479 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:27:58.479 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:58.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:27:58.479 00:27:58.479 --- 10.0.0.1 ping statistics --- 00:27:58.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.480 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:58.480 ************************************ 00:27:58.480 START TEST nvmf_digest_clean 00:27:58.480 ************************************ 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1596520 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1596520 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1596520 ']' 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:58.480 11:16:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:58.480 [2024-07-26 11:16:17.478052] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:58.480 [2024-07-26 11:16:17.478091] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.480 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.480 [2024-07-26 11:16:17.533726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.480 [2024-07-26 11:16:17.604541] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.480 [2024-07-26 11:16:17.604582] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.480 [2024-07-26 11:16:17.604589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.480 [2024-07-26 11:16:17.604595] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.480 [2024-07-26 11:16:17.604599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
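The trace above brings the TCP test bed up before any digest I/O runs: one of the two ice-driven ports (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), the other (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened in iptables, connectivity is confirmed with a ping in each direction, and nvmf_tgt is then launched inside the namespace with RPC deferred. A condensed sketch of that bring-up, using the interface names, addresses and paths this log records (the polling loop is only a stand-in for the waitforlisten helper):

  # Condensed bring-up sketch based on the commands traced above; interface names,
  # addresses and paths are the ones recorded in this log.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
  modprobe nvme-tcp
  # Start the target inside the namespace with RPC deferred, then wait for its
  # default RPC socket to answer (a stand-in for waitforlisten).
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done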
00:27:58.480 [2024-07-26 11:16:17.604616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:59.050 null0 00:27:59.050 [2024-07-26 11:16:18.417162] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.050 [2024-07-26 11:16:18.441345] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1596579 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1596579 /var/tmp/bperf.sock 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1596579 ']' 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:59.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.050 11:16:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:59.050 [2024-07-26 11:16:18.494674] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:59.050 [2024-07-26 11:16:18.494717] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596579 ] 00:27:59.050 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.311 [2024-07-26 11:16:18.549767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.311 [2024-07-26 11:16:18.631034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.881 11:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.881 11:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:59.881 11:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:59.881 11:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:59.881 11:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:00.141 11:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:00.141 11:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:00.402 nvme0n1 00:28:00.402 11:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:00.402 11:16:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:00.402 Running I/O for 2 seconds... 
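The run above is the first digest-clean pass (4 KiB random reads at queue depth 128). The client side is fully visible in the trace: bdevperf starts idle on its own RPC socket, its framework is initialized, the controller is attached with TCP data digest enabled (--ddgst), and the two-second workload is kicked off through bdevperf.py. Replayed by hand with the exact commands recorded above (the short wait for the bperf socket done by waitforlisten is omitted), the sequence is:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start bdevperf idle (-z) with RPC deferred, on its own socket.
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
          -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!
  # Finish subsystem init, then attach the target with data digest (--ddgst) enabled.
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
          -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Run the workload defined on the command line above for its 2-second window.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests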
00:28:02.943 00:28:02.943 Latency(us) 00:28:02.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.943 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:02.943 nvme0n1 : 2.00 26334.82 102.87 0.00 0.00 4854.86 2778.16 29063.79 00:28:02.943 =================================================================================================================== 00:28:02.943 Total : 26334.82 102.87 0.00 0.00 4854.86 2778.16 29063.79 00:28:02.943 0 00:28:02.943 11:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:02.943 11:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:02.943 11:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:02.943 11:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:02.943 | select(.opcode=="crc32c") 00:28:02.943 | "\(.module_name) \(.executed)"' 00:28:02.943 11:16:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1596579 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1596579 ']' 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1596579 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1596579 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1596579' 00:28:02.943 killing process with pid 1596579 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1596579 00:28:02.943 Received shutdown signal, test time was about 2.000000 seconds 00:28:02.943 00:28:02.943 Latency(us) 00:28:02.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.943 =================================================================================================================== 00:28:02.943 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1596579 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1597248 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1597248 /var/tmp/bperf.sock 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1597248 ']' 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:02.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:02.943 11:16:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:02.943 [2024-07-26 11:16:22.352433] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:02.943 [2024-07-26 11:16:22.352481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597248 ] 00:28:02.943 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:02.943 Zero copy mechanism will not be used. 
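Each two-second pass is followed by the same verification before its bperf process is killed: the accelerator statistics are read back over the bperf RPC socket, and the crc32c opcode must have been executed by the software module in these clean (non-DSA) runs. A condensed sketch of that check, built from the accel_get_stats call and jq filter recorded in the trace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Pull accel statistics from bdevperf and keep only the crc32c opcode.
  read -r acc_module acc_executed < <(
          "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
          jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # The clean digest test expects software crc32c with a non-zero execution count.
  [[ $acc_module == software ]] && (( acc_executed > 0 )) &&
          echo "crc32c handled by $acc_module ($acc_executed operations)"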
00:28:02.943 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.943 [2024-07-26 11:16:22.406494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.204 [2024-07-26 11:16:22.477420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.773 11:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:03.773 11:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:03.773 11:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:03.773 11:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:03.773 11:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:04.042 11:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:04.042 11:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:04.303 nvme0n1 00:28:04.303 11:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:04.303 11:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:04.303 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:04.303 Zero copy mechanism will not be used. 00:28:04.303 Running I/O for 2 seconds... 
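Once a pass has printed its table, it is torn down the same way every time in this log: killprocess checks that the pid still exists and is not a sudo wrapper, kills it, and the caller then waits on it, which is when the "Received shutdown signal" summary gets flushed. A hedged reconstruction of that teardown (the real helper lives in autotest_common.sh; this mirrors only the checks visible in the trace):

  killprocess() {
          local pid=$1 process_name
          [ -z "$pid" ] && return 1
          kill -0 "$pid" || return 1                      # still running?
          [ "$(uname)" = Linux ] &&
                  process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1          # never kill the sudo wrapper
          echo "killing process with pid $pid"
          kill "$pid"
  }
  killprocess "$bperfpid" && wait "$bperfpid"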
00:28:06.272 00:28:06.272 Latency(us) 00:28:06.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.272 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:06.272 nvme0n1 : 2.01 2058.57 257.32 0.00 0.00 7770.30 7123.48 30317.52 00:28:06.272 =================================================================================================================== 00:28:06.272 Total : 2058.57 257.32 0.00 0.00 7770.30 7123.48 30317.52 00:28:06.272 0 00:28:06.272 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:06.272 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:06.272 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:06.272 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:06.272 | select(.opcode=="crc32c") 00:28:06.272 | "\(.module_name) \(.executed)"' 00:28:06.272 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1597248 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1597248 ']' 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1597248 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1597248 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1597248' 00:28:06.533 killing process with pid 1597248 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1597248 00:28:06.533 Received shutdown signal, test time was about 2.000000 seconds 00:28:06.533 00:28:06.533 Latency(us) 00:28:06.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.533 =================================================================================================================== 00:28:06.533 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:06.533 11:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1597248 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1597946 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1597946 /var/tmp/bperf.sock 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1597946 ']' 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:06.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.794 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:06.794 [2024-07-26 11:16:26.159382] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:06.794 [2024-07-26 11:16:26.159434] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1597946 ] 00:28:06.794 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.794 [2024-07-26 11:16:26.212821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.794 [2024-07-26 11:16:26.286983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.735 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:07.735 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:07.735 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:07.735 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:07.735 11:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:07.735 11:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.735 11:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:08.305 nvme0n1 00:28:08.305 11:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:08.305 11:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:08.305 Running I/O for 2 seconds... 
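The completed tables above are internally consistent: the MiB/s column is simply IOPS times the I/O size divided by 2^20. Checking the two randread results reported so far (26334.82 IOPS at 4096 bytes, 2058.57 IOPS at 131072 bytes):

  # MiB/s = IOPS * IO_size / 2^20, checked against the randread tables above.
  awk 'BEGIN { printf "%.2f\n", 26334.82 * 4096   / 1048576 }'    # -> 102.87 MiB/s
  awk 'BEGIN { printf "%.2f\n", 2058.57  * 131072 / 1048576 }'    # -> 257.32 MiB/s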
00:28:10.214 00:28:10.214 Latency(us) 00:28:10.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.214 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:10.214 nvme0n1 : 2.00 26423.00 103.21 0.00 0.00 4835.82 2664.18 34420.65 00:28:10.214 =================================================================================================================== 00:28:10.214 Total : 26423.00 103.21 0.00 0.00 4835.82 2664.18 34420.65 00:28:10.214 0 00:28:10.214 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:10.214 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:10.214 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:10.214 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:10.214 | select(.opcode=="crc32c") 00:28:10.214 | "\(.module_name) \(.executed)"' 00:28:10.214 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1597946 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1597946 ']' 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1597946 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1597946 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1597946' 00:28:10.474 killing process with pid 1597946 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1597946 00:28:10.474 Received shutdown signal, test time was about 2.000000 seconds 00:28:10.474 00:28:10.474 Latency(us) 00:28:10.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.474 =================================================================================================================== 00:28:10.474 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:10.474 11:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1597946 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1598582 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1598582 /var/tmp/bperf.sock 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1598582 ']' 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:10.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:10.734 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.734 [2024-07-26 11:16:30.124018] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:10.734 [2024-07-26 11:16:30.124072] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598582 ] 00:28:10.734 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:10.734 Zero copy mechanism will not be used. 
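With the pass being prepared above, all four workloads that run_digest drives have now been launched. Only the bdevperf workload arguments change between them; the core mask, RPC socket, two-second runtime and deferred RPC stay the same. A small reference loop that prints the four argument sets as they appear in this trace:

  # The four digest-clean passes traced in this log, in launch order.
  bperf_args=(
          "-w randread  -o 4096   -q 128"
          "-w randread  -o 131072 -q 16"
          "-w randwrite -o 4096   -q 128"
          "-w randwrite -o 131072 -q 16"
  )
  for args in "${bperf_args[@]}"; do
          echo "bdevperf -m 2 -r /var/tmp/bperf.sock $args -t 2 -z --wait-for-rpc"
  done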
00:28:10.734 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.734 [2024-07-26 11:16:30.177953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.994 [2024-07-26 11:16:30.260845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.563 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:11.563 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:11.563 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:11.563 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:11.563 11:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:11.823 11:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:11.823 11:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.083 nvme0n1 00:28:12.083 11:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:12.083 11:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:12.083 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:12.083 Zero copy mechanism will not be used. 00:28:12.083 Running I/O for 2 seconds... 
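All four passes talk to a target that was configured once, by the rpc_cmd heredoc inside common_target_config; xtrace does not expand that heredoc, only its effects are visible earlier in the trace (the null0 bdev, the TCP transport init notice, and the listener on 10.0.0.2 port 4420 for nqn.2016-06.io.spdk:cnode1). Assuming the standard SPDK RPCs, a hand-driven equivalent would look roughly like the following; the 100 MiB null-bdev size and the exact argument set are illustrative, not taken from the log:

  # Assumed equivalent of common_target_config (the real RPC arguments are not
  # expanded in this trace; sizes below are illustrative).
  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC framework_start_init
  $RPC bdev_null_create null0 100 4096
  $RPC nvmf_create_transport -t tcp -o          # NVMF_TRANSPORT_OPTS recorded above
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420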
00:28:14.624 00:28:14.624 Latency(us) 00:28:14.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.624 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:14.624 nvme0n1 : 2.01 1676.26 209.53 0.00 0.00 9522.61 7864.32 34876.55 00:28:14.624 =================================================================================================================== 00:28:14.624 Total : 1676.26 209.53 0.00 0.00 9522.61 7864.32 34876.55 00:28:14.624 0 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:14.624 | select(.opcode=="crc32c") 00:28:14.624 | "\(.module_name) \(.executed)"' 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1598582 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1598582 ']' 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1598582 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1598582 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1598582' 00:28:14.624 killing process with pid 1598582 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1598582 00:28:14.624 Received shutdown signal, test time was about 2.000000 seconds 00:28:14.624 00:28:14.624 Latency(us) 00:28:14.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.624 =================================================================================================================== 00:28:14.624 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1598582 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1596520 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1596520 ']' 00:28:14.624 11:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1596520 00:28:14.624 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:14.624 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:14.624 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1596520 00:28:14.624 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:14.624 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:14.624 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1596520' 00:28:14.624 killing process with pid 1596520 00:28:14.624 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1596520 00:28:14.624 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1596520 00:28:14.885 00:28:14.885 real 0m16.805s 00:28:14.885 user 0m33.602s 00:28:14.885 sys 0m3.155s 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:14.885 ************************************ 00:28:14.885 END TEST nvmf_digest_clean 00:28:14.885 ************************************ 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:14.885 ************************************ 00:28:14.885 START TEST nvmf_digest_error 00:28:14.885 ************************************ 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1599302 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1599302 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1599302 ']' 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:14.885 11:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:14.885 [2024-07-26 11:16:34.356869] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:14.885 [2024-07-26 11:16:34.356908] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.145 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.145 [2024-07-26 11:16:34.414409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.145 [2024-07-26 11:16:34.487350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.145 [2024-07-26 11:16:34.487390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.145 [2024-07-26 11:16:34.487397] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.145 [2024-07-26 11:16:34.487403] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.145 [2024-07-26 11:16:34.487408] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:15.145 [2024-07-26 11:16:34.487425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.714 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:15.714 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:15.714 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:15.714 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:15.714 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.714 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.714 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:15.714 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.714 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.714 [2024-07-26 11:16:35.189450] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:15.714 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.714 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:15.714 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:15.714 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.714 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.975 null0 00:28:15.975 [2024-07-26 11:16:35.283542] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.975 [2024-07-26 11:16:35.307728] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1599399 00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1599399 /var/tmp/bperf.sock 00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1599399 ']' 
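At this point the trace routes the crc32c opcode to the error accel module, then common_target_config brings up the target (the null0 bdev, the TCP transport, and a listener on 10.0.0.2 port 4420) before bdevperf is launched against /var/tmp/bperf.sock. A hedged recap of that sequence driven directly through rpc.py; framework_start_init and the nvmf_* calls are inferred from the notices above rather than printed verbatim, and the null-bdev size and block size are placeholders:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Route crc32c through the error-injection accel module, then finish app init.
  $RPC accel_assign_opc -o crc32c -m error
  $RPC framework_start_init

  # Target config mirrored from the trace: null bdev, TCP transport, subsystem, listener.
  $RPC bdev_null_create null0 1000 512            # size/block size assumed, not logged
  $RPC nvmf_create_transport -t tcp
  $RPC nvmf_create_subsystem "$NQN" -a
  $RPC nvmf_subsystem_add_ns "$NQN" null0
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

  # Host-side I/O generator: bdevperf in wait-for-tests mode (-z) on its own RPC socket.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &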
00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:15.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.975 11:16:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.975 [2024-07-26 11:16:35.357862] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:15.975 [2024-07-26 11:16:35.357910] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1599399 ] 00:28:15.975 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.975 [2024-07-26 11:16:35.411479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.235 [2024-07-26 11:16:35.494616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.804 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:16.804 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:16.804 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:16.804 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:17.065 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:17.065 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.065 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.065 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.065 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.065 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.325 nvme0n1 00:28:17.325 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:17.325 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.325 11:16:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.325 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.325 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:17.325 11:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:17.585 Running I/O for 2 seconds... 00:28:17.585 [2024-07-26 11:16:36.897494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.585 [2024-07-26 11:16:36.897529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.585 [2024-07-26 11:16:36.897539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.585 [2024-07-26 11:16:36.911787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.585 [2024-07-26 11:16:36.911812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.585 [2024-07-26 11:16:36.911822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.585 [2024-07-26 11:16:36.920973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.585 [2024-07-26 11:16:36.920996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.585 [2024-07-26 11:16:36.921005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.585 [2024-07-26 11:16:36.930117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.585 [2024-07-26 11:16:36.930138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.585 [2024-07-26 11:16:36.930147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.585 [2024-07-26 11:16:36.939755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.585 [2024-07-26 11:16:36.939778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.585 [2024-07-26 11:16:36.939786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.585 [2024-07-26 11:16:36.949700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.585 [2024-07-26 11:16:36.949721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.585 [2024-07-26 11:16:36.949730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.585 [2024-07-26 11:16:36.959752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.586 [2024-07-26 11:16:36.959777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.586 [2024-07-26 11:16:36.959786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.586 [2024-07-26 11:16:36.968236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.586 [2024-07-26 11:16:36.968258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.586 [2024-07-26 11:16:36.968267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.586 [2024-07-26 11:16:36.979457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.586 [2024-07-26 11:16:36.979478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.586 [2024-07-26 11:16:36.979487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.586 [2024-07-26 11:16:36.987894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.586 [2024-07-26 11:16:36.987915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.586 [2024-07-26 11:16:36.987923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.586 [2024-07-26 11:16:36.999151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.586 [2024-07-26 11:16:36.999172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.586 [2024-07-26 11:16:36.999180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.586 [2024-07-26 11:16:37.007294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.586 [2024-07-26 11:16:37.007314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.586 [2024-07-26 11:16:37.007323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.586 [2024-07-26 11:16:37.017122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.586 [2024-07-26 11:16:37.017143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.586 
[2024-07-26 11:16:37.017151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.586 [2024-07-26 11:16:37.026428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.586 [2024-07-26 11:16:37.026449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.586 [2024-07-26 11:16:37.026457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.586 [2024-07-26 11:16:37.035705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.586 [2024-07-26 11:16:37.035726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.586 [2024-07-26 11:16:37.035734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.586 [2024-07-26 11:16:37.045012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.586 [2024-07-26 11:16:37.045032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.586 [2024-07-26 11:16:37.045040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.586 [2024-07-26 11:16:37.054358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.586 [2024-07-26 11:16:37.054378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.586 [2024-07-26 11:16:37.054386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.586 [2024-07-26 11:16:37.063265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.586 [2024-07-26 11:16:37.063285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.586 [2024-07-26 11:16:37.063294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.586 [2024-07-26 11:16:37.073209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.586 [2024-07-26 11:16:37.073230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.586 [2024-07-26 11:16:37.073239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.083452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.083473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10057 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.083481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.091654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.091674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.091682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.101889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.101909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.101917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.110855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.110876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.110884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.120819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.120840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.120851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.129920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.129940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.129948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.139741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.139761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.139769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.148150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.148170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:56 nsid:1 lba:4084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.148178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.158566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.158587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.158595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.167136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.167156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.167165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.177317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.177337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.177346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.186055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.186075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.186083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.196803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.196824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.196832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.204878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.204902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.204910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.215801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.215822] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.215830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.223976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.223995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.224004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.234422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.234443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.234451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.242845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.242866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.242875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.252183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.252203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.252211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.270222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.270242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.270251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.279706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.279727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.279735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.289368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 
00:28:17.847 [2024-07-26 11:16:37.289389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.289398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.298925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.298945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.298954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.309291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.847 [2024-07-26 11:16:37.309311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.847 [2024-07-26 11:16:37.309319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.847 [2024-07-26 11:16:37.318442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.848 [2024-07-26 11:16:37.318463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.848 [2024-07-26 11:16:37.318471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.848 [2024-07-26 11:16:37.328191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.848 [2024-07-26 11:16:37.328220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.848 [2024-07-26 11:16:37.328228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.848 [2024-07-26 11:16:37.337322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:17.848 [2024-07-26 11:16:37.337343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.848 [2024-07-26 11:16:37.337351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.108 [2024-07-26 11:16:37.348092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.108 [2024-07-26 11:16:37.348112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.108 [2024-07-26 11:16:37.348120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.108 [2024-07-26 11:16:37.363115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.108 [2024-07-26 11:16:37.363135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.108 [2024-07-26 11:16:37.363143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.108 [2024-07-26 11:16:37.372150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.108 [2024-07-26 11:16:37.372171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.108 [2024-07-26 11:16:37.372178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.108 [2024-07-26 11:16:37.381554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.108 [2024-07-26 11:16:37.381575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.381587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.391656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.391676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.391685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.400424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.400444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.400452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.410559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.410580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.410588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.419240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.419260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.419268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.428920] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.428940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.428948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.438083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.438103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.438111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.447626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.447646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.447654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.457467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.457487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.457495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.466369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.466393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.466401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.476131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.476151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.476160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.485329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.485348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.485356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
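Each of these completions pairs a data digest error from nvme_tcp.c:1459 with a COMMAND TRANSIENT TRANSPORT ERROR (00/22): the bperf-side setup earlier in the trace attached the controller with data digests enabled (--ddgst) and armed the error accel module to corrupt crc32c results, so failed digest verification is reported per READ instead of silently returning bad data. For reference, the host-side RPC sequence as it appears in the trace (socket path, flags, and controller name copied from the log):

  BRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

  # bdevperf-side configuration issued by digest.sh in the trace above.
  $BRPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $BRPC accel_error_inject_error -o crc32c -t disable
  $BRPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm the crc32c corruption and drive the 2-second randread workload.
  $BRPC accel_error_inject_error -o crc32c -t corrupt -i 256
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests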
00:28:18.109 [2024-07-26 11:16:37.495443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.495462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.495470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.504803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.504823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.504832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.513743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.513762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.513770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.523180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.523201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.523209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.531480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.531500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.531508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.541964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.541984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.541992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.551106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.551126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.551135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.560884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.560904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.560912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.570095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.570115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.570123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.579080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.579100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.579108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.589227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.589247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.589255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.109 [2024-07-26 11:16:37.598410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.109 [2024-07-26 11:16:37.598430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.109 [2024-07-26 11:16:37.598438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.370 [2024-07-26 11:16:37.607748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.370 [2024-07-26 11:16:37.607768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.607777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.617615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.617635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.617643] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.626696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.626719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.626728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.636482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.636502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.636510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.645481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.645502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.645510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.655082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.655101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.655110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.664813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.664834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.664842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.674661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.674682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.674690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.683252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.683271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.683279] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.692981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.693002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.693010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.703600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.703622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.703630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.713616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.713636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.713644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.722430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.722451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.722459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.732526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.732547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.732555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.741817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.741837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.741845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.750300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.750320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:18.371 [2024-07-26 11:16:37.750329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.760269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.760289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.760298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.769208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.769228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.769236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.778641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.778661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.778669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.787748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.787768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.787780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.797325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.797346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.797354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.806641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.806660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.806668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.816203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.816223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 
lba:9018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.816231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.824870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.824890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.824899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.835242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.835262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.835270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.844600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.844621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.844629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.852967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.371 [2024-07-26 11:16:37.852987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.371 [2024-07-26 11:16:37.852995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.371 [2024-07-26 11:16:37.862917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.372 [2024-07-26 11:16:37.862937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.372 [2024-07-26 11:16:37.862945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:37.872361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:37.872385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:37.872394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:37.881704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:37.881725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:37.881737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:37.891509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:37.891530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:37.891538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:37.901017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:37.901038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:37.901050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:37.909579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:37.909599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:37.909607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:37.919650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:37.919670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:37.919678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:37.928590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:37.928610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:37.928618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:37.939090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:37.939111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:37.939119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:37.947835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 
[2024-07-26 11:16:37.947855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:37.947863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:37.958005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:37.958025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:37.958033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:37.966596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:37.966617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:37.966625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:37.976279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:37.976299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:37.976308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:37.985338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:37.985358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:37.985366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:37.995142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:37.995161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:37.995170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:38.003477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:38.003497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:38.003505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:38.013821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:38.013841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:38.013850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:38.023754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:38.023775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:38.023784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:38.032846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:38.032866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:38.032877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:38.042674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:38.042694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.633 [2024-07-26 11:16:38.042702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.633 [2024-07-26 11:16:38.051229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.633 [2024-07-26 11:16:38.051250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.634 [2024-07-26 11:16:38.051258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.634 [2024-07-26 11:16:38.061281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.634 [2024-07-26 11:16:38.061301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.634 [2024-07-26 11:16:38.061308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.634 [2024-07-26 11:16:38.070465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.634 [2024-07-26 11:16:38.070485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.634 [2024-07-26 11:16:38.070493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.634 [2024-07-26 11:16:38.079974] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.634 [2024-07-26 11:16:38.079994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.634 [2024-07-26 11:16:38.080002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.634 [2024-07-26 11:16:38.088912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.634 [2024-07-26 11:16:38.088932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.634 [2024-07-26 11:16:38.088939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.634 [2024-07-26 11:16:38.098425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.634 [2024-07-26 11:16:38.098445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.634 [2024-07-26 11:16:38.098454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.634 [2024-07-26 11:16:38.107817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.634 [2024-07-26 11:16:38.107838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.634 [2024-07-26 11:16:38.107846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.634 [2024-07-26 11:16:38.117590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.634 [2024-07-26 11:16:38.117616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.634 [2024-07-26 11:16:38.117624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.634 [2024-07-26 11:16:38.125978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.634 [2024-07-26 11:16:38.125999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.634 [2024-07-26 11:16:38.126007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.137530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.137550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.137558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:18.895 [2024-07-26 11:16:38.145400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.145420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.145429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.156257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.156278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.156285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.164783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.164803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.164812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.174686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.174707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.174715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.183790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.183810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.183818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.193162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.193182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.193190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.203006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.203026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.203034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.212235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.212255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.212264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.221496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.221516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.221524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.230621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.230641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.230649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.239670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.239691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.239699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.250022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.250047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.250056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.259175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.259197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.259206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.268596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.268619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.268627] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.277031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.277061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.277070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.287369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.287392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.287400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.297031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.297058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.297067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.306450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.306472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.306480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.315341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.315363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.315371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.325312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.325332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.325340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.334701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.334721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.334730] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.343968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.343988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.343996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.353840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.353861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.353869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.362996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.895 [2024-07-26 11:16:38.363016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.895 [2024-07-26 11:16:38.363024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.895 [2024-07-26 11:16:38.372676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.896 [2024-07-26 11:16:38.372696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.896 [2024-07-26 11:16:38.372704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.896 [2024-07-26 11:16:38.381561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:18.896 [2024-07-26 11:16:38.381582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.896 [2024-07-26 11:16:38.381590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.156 [2024-07-26 11:16:38.391554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.156 [2024-07-26 11:16:38.391575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.156 [2024-07-26 11:16:38.391583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.156 [2024-07-26 11:16:38.401611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.156 [2024-07-26 11:16:38.401632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:19.156 [2024-07-26 11:16:38.401640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.156 [2024-07-26 11:16:38.410179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.156 [2024-07-26 11:16:38.410199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.156 [2024-07-26 11:16:38.410207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.156 [2024-07-26 11:16:38.419503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.156 [2024-07-26 11:16:38.419524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.156 [2024-07-26 11:16:38.419532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.156 [2024-07-26 11:16:38.429216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.156 [2024-07-26 11:16:38.429236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.156 [2024-07-26 11:16:38.429245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.439613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.439634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.439646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.447599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.447619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.447627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.457272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.457293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.457301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.466545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.466566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12131 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.466575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.476252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.476272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.476281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.485598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.485619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.485627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.494984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.495004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.495013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.503962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.503983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.503992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.513462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.513482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.513491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.522935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.522959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.522967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.531717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.531738] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.531746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.541636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.541657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.541665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.550436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.550457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.550465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.559731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.559752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.559760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.569719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.569740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.569748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.578958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.578978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.578987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.587923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.587944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.587952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.597473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.597492] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.597501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.607031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.607059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.607067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.616173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.616194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.616202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.625657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.625678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.625686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.635055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.635076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.635084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.157 [2024-07-26 11:16:38.644130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.157 [2024-07-26 11:16:38.644150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.157 [2024-07-26 11:16:38.644158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.654268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.418 [2024-07-26 11:16:38.654289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.418 [2024-07-26 11:16:38.654297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.663221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 
00:28:19.418 [2024-07-26 11:16:38.663241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.418 [2024-07-26 11:16:38.663249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.673136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.418 [2024-07-26 11:16:38.673157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.418 [2024-07-26 11:16:38.673165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.682525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.418 [2024-07-26 11:16:38.682546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.418 [2024-07-26 11:16:38.682558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.691648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.418 [2024-07-26 11:16:38.691667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.418 [2024-07-26 11:16:38.691675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.700658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.418 [2024-07-26 11:16:38.700678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.418 [2024-07-26 11:16:38.700687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.710807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.418 [2024-07-26 11:16:38.710827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.418 [2024-07-26 11:16:38.710835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.719416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.418 [2024-07-26 11:16:38.719437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.418 [2024-07-26 11:16:38.719445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.729398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.418 [2024-07-26 11:16:38.729418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.418 [2024-07-26 11:16:38.729426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.739457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.418 [2024-07-26 11:16:38.739477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.418 [2024-07-26 11:16:38.739486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.748106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.418 [2024-07-26 11:16:38.748126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.418 [2024-07-26 11:16:38.748134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.757732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.418 [2024-07-26 11:16:38.757752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.418 [2024-07-26 11:16:38.757760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.768413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.418 [2024-07-26 11:16:38.768437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.418 [2024-07-26 11:16:38.768445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.777976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.418 [2024-07-26 11:16:38.777996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.418 [2024-07-26 11:16:38.778004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.418 [2024-07-26 11:16:38.788093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.418 [2024-07-26 11:16:38.788113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-26 11:16:38.788121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.419 [2024-07-26 11:16:38.796973] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.419 [2024-07-26 11:16:38.796994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-26 11:16:38.797003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.419 [2024-07-26 11:16:38.808584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.419 [2024-07-26 11:16:38.808604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-26 11:16:38.808611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.419 [2024-07-26 11:16:38.821951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.419 [2024-07-26 11:16:38.821971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-26 11:16:38.821980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.419 [2024-07-26 11:16:38.831310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.419 [2024-07-26 11:16:38.831330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-26 11:16:38.831338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.419 [2024-07-26 11:16:38.841238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.419 [2024-07-26 11:16:38.841258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-26 11:16:38.841266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.419 [2024-07-26 11:16:38.850883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.419 [2024-07-26 11:16:38.850902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-26 11:16:38.850911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:19.419 [2024-07-26 11:16:38.859452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0) 00:28:19.419 [2024-07-26 11:16:38.859473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.419 [2024-07-26 11:16:38.859481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0
00:28:19.419 [2024-07-26 11:16:38.870093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5794f0)
00:28:19.419 [2024-07-26 11:16:38.870113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.419 [2024-07-26 11:16:38.870121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:19.679
00:28:19.679 Latency(us)
00:28:19.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.679 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:19.679 nvme0n1 : 2.05 25815.32 100.84 0.00 0.00 4852.32 2678.43 46502.07
00:28:19.679 ===================================================================================================================
00:28:19.679 Total : 25815.32 100.84 0.00 0.00 4852.32 2678.43 46502.07
00:28:19.679 0
00:28:19.679 11:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:19.679 11:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:19.679 11:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:19.679 | .driver_specific
00:28:19.679 | .nvme_error
00:28:19.679 | .status_code
00:28:19.679 | .command_transient_transport_error'
00:28:19.679 11:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:19.679 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 207 > 0 ))
00:28:19.679 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1599399
00:28:19.679 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1599399 ']'
00:28:19.679 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1599399
00:28:19.679 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:19.679 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:19.679 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1599399
00:28:19.679 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:19.679 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:19.679 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1599399'
00:28:19.679 killing process with pid 1599399
00:28:19.679 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1599399
00:28:19.679 Received shutdown signal, test time was about 2.000000 seconds
00:28:19.679
00:28:19.679 Latency(us)
00:28:19.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.679 ===================================================================================================================
00:28:19.679 Total :
0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:19.679 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1599399
00:28:19.940 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:19.940 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:19.940 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:19.940 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:19.940 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:19.940 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1600096
00:28:19.940 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1600096 /var/tmp/bperf.sock
00:28:19.940 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:19.940 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1600096 ']'
00:28:19.940 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:19.940 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:19.940 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:19.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:19.940 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:19.940 11:16:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:19.940 [2024-07-26 11:16:39.389650] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:28:19.940 [2024-07-26 11:16:39.389701] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600096 ]
00:28:19.940 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:19.940 Zero copy mechanism will not be used.
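The pass criterion for the run that just finished is visible in the trace above: host/digest.sh reads the per-controller NVMe error counters over the bperf RPC socket and requires that at least one COMMAND TRANSIENT TRANSPORT ERROR was counted (207 in this run) before it kills bdevperf. A minimal standalone sketch of that check, assuming the workspace layout from this job and a bdevperf instance still listening on /var/tmp/bperf.sock:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Read I/O statistics for the nvme0n1 bdev and pull out how many completions
# ended in COMMAND TRANSIENT TRANSPORT ERROR (the status the injected data
# digest errors map to).
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0]
         | .driver_specific
         | .nvme_error
         | .status_code
         | .command_transient_transport_error')

# The digest error test only passes if the injected corruption was actually
# observed as transient transport errors.
(( errcount > 0 )) || { echo "no transient transport errors recorded" >&2; exit 1; }
echo "transient transport errors: $errcount"

The counters sit under driver_specific.nvme_error in the bdev_get_iostat output; they are only populated because the controller was created after bdev_nvme_set_options --nvme-error-stat, as is done again for the next run below.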
00:28:19.940 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.200 [2024-07-26 11:16:39.444153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.200 [2024-07-26 11:16:39.515006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.770 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:20.770 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:20.770 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:20.770 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:21.030 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:21.030 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.030 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.030 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.030 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.030 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.290 nvme0n1 00:28:21.549 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:21.549 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.549 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.549 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.549 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:21.549 11:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:21.549 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:21.549 Zero copy mechanism will not be used. 00:28:21.549 Running I/O for 2 seconds... 
00:28:21.550 [2024-07-26 11:16:40.925980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.550 [2024-07-26 11:16:40.926013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.550 [2024-07-26 11:16:40.926023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.550 [2024-07-26 11:16:40.942238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.550 [2024-07-26 11:16:40.942272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.550 [2024-07-26 11:16:40.942281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.550 [2024-07-26 11:16:40.957618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.550 [2024-07-26 11:16:40.957641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.550 [2024-07-26 11:16:40.957649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.550 [2024-07-26 11:16:40.982599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.550 [2024-07-26 11:16:40.982620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.550 [2024-07-26 11:16:40.982629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.550 [2024-07-26 11:16:40.998431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.550 [2024-07-26 11:16:40.998451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.550 [2024-07-26 11:16:40.998459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.550 [2024-07-26 11:16:41.022951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.550 [2024-07-26 11:16:41.022971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.550 [2024-07-26 11:16:41.022979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.550 [2024-07-26 11:16:41.039205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.550 [2024-07-26 11:16:41.039230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.550 [2024-07-26 11:16:41.039238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.063847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.063868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.063876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.079493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.079514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.079522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.104491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.104512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.104520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.119995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.120017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.120025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.134515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.134535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.134543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.149017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.149040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.149055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.163610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.163631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.163640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.178054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.178075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.178083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.193216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.193237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.193245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.208065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.208085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.208093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.222817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.222838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.222847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.237385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.237407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.237415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.251949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.251969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.251977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.266758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.266779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:21.811 [2024-07-26 11:16:41.266787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.281306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.281328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.281336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.811 [2024-07-26 11:16:41.295913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:21.811 [2024-07-26 11:16:41.295935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.811 [2024-07-26 11:16:41.295942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.310520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.310542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.310557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.325603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.325625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.325632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.340100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.340120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.340128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.354559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.354581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.354588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.369335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.369357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.369365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.383785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.383806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.383814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.398353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.398375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.398383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.412902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.412924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.412932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.427319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.427340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.427348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.441913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.441934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.441942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.456725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.456746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.456754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.471666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.471687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.471695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.486368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.486390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.486397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.501020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.501040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.501054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.515573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.515594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.515603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.530095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.530115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.530124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.545208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.545229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.545237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.072 [2024-07-26 11:16:41.560428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.072 [2024-07-26 11:16:41.560449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.072 [2024-07-26 11:16:41.560461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.333 [2024-07-26 11:16:41.576336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 
00:28:22.333 [2024-07-26 11:16:41.576356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.333 [2024-07-26 11:16:41.576364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.333 [2024-07-26 11:16:41.590712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.333 [2024-07-26 11:16:41.590733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.333 [2024-07-26 11:16:41.590741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.333 [2024-07-26 11:16:41.605016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.333 [2024-07-26 11:16:41.605037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.333 [2024-07-26 11:16:41.605051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.333 [2024-07-26 11:16:41.619650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.333 [2024-07-26 11:16:41.619671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.619679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.334 [2024-07-26 11:16:41.634112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.334 [2024-07-26 11:16:41.634134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.634142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.334 [2024-07-26 11:16:41.648526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.334 [2024-07-26 11:16:41.648547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.648555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.334 [2024-07-26 11:16:41.663157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.334 [2024-07-26 11:16:41.663176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.663184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.334 [2024-07-26 11:16:41.677550] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.334 [2024-07-26 11:16:41.677569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.677577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.334 [2024-07-26 11:16:41.691967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.334 [2024-07-26 11:16:41.691991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.691999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.334 [2024-07-26 11:16:41.706911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.334 [2024-07-26 11:16:41.706931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.706939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.334 [2024-07-26 11:16:41.721423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.334 [2024-07-26 11:16:41.721443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.721451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.334 [2024-07-26 11:16:41.736137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.334 [2024-07-26 11:16:41.736157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.736165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.334 [2024-07-26 11:16:41.750745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.334 [2024-07-26 11:16:41.750764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.750773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.334 [2024-07-26 11:16:41.765168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.334 [2024-07-26 11:16:41.765187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.765195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:28:22.334 [2024-07-26 11:16:41.779674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.334 [2024-07-26 11:16:41.779695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.779702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.334 [2024-07-26 11:16:41.794131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.334 [2024-07-26 11:16:41.794150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.794158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.334 [2024-07-26 11:16:41.808528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.334 [2024-07-26 11:16:41.808548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.808555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.334 [2024-07-26 11:16:41.823266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.334 [2024-07-26 11:16:41.823286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.334 [2024-07-26 11:16:41.823295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:41.837696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:41.837716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:41.837724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:41.852307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:41.852328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:41.852335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:41.866717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:41.866738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:41.866746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:41.881210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:41.881231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:41.881239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:41.895811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:41.895830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:41.895838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:41.910450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:41.910470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:41.910478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:41.925068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:41.925088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:41.925097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:41.939467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:41.939488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:41.939499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:41.953872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:41.953893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:41.953902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:41.968608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:41.968629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:41.968637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:41.983063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:41.983084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:41.983092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:41.997693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:41.997714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:41.997722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:42.012134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:42.012154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:42.012162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:42.026761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:42.026780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:42.026789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:42.041202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:42.041222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:42.041230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:42.055615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:42.055635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:42.055643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:42.070221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:42.070241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:22.597 [2024-07-26 11:16:42.070249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.597 [2024-07-26 11:16:42.084960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.597 [2024-07-26 11:16:42.084980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.597 [2024-07-26 11:16:42.084989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.921 [2024-07-26 11:16:42.099598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.921 [2024-07-26 11:16:42.099619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.921 [2024-07-26 11:16:42.099627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.114084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.114105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.114113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.128495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.128516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.128524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.142900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.142920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.142928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.157330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.157350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.157358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.171738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.171758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.171766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.186345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.186365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.186376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.200765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.200786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.200794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.215180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.215199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.215207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.229891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.229911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.229919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.244301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.244321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.244330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.258805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.258825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.258834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.273243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.273262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.273271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.287836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.287857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.287865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.302368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.302390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.302398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.317009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.317033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.317047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.331438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.331459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.331467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.345876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.345897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.345905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.360403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.360423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.360430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.375029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 
00:28:22.922 [2024-07-26 11:16:42.375056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.375065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.922 [2024-07-26 11:16:42.389494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:22.922 [2024-07-26 11:16:42.389516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.922 [2024-07-26 11:16:42.389524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.404177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.404197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.404206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.418809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.418829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.418837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.433469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.433490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.433509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.447908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.447929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.447937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.462621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.462641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.462649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.477069] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.477088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.477096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.491475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.491496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.491504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.505878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.505898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.505906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.520480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.520500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.520508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.534864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.534885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.534893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.549767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.549787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.549795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.564203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.564228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.564237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.578873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.578895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.578903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.593478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.593499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.593507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.607995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.608016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.608024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.622435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.622455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.622463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.637421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.637443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.637451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.652565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.652586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.652594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.182 [2024-07-26 11:16:42.667676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.182 [2024-07-26 11:16:42.667697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.182 [2024-07-26 11:16:42.667705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.682719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.682740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.441 [2024-07-26 11:16:42.682748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.697562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.697583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.441 [2024-07-26 11:16:42.697592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.712439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.712461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.441 [2024-07-26 11:16:42.712470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.727433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.727455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.441 [2024-07-26 11:16:42.727463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.742377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.742400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.441 [2024-07-26 11:16:42.742409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.757852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.757872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.441 [2024-07-26 11:16:42.757881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.772736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.772758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.441 [2024-07-26 11:16:42.772766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.787249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.787270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.441 [2024-07-26 11:16:42.787279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.801843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.801864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.441 [2024-07-26 11:16:42.801872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.816239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.816259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.441 [2024-07-26 11:16:42.816271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.830882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.830904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.441 [2024-07-26 11:16:42.830911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.845301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.845321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.441 [2024-07-26 11:16:42.845330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.859759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.859779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.441 [2024-07-26 11:16:42.859787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.874450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.874471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:23.441 [2024-07-26 11:16:42.874479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.441 [2024-07-26 11:16:42.889103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24c4030) 00:28:23.441 [2024-07-26 11:16:42.889123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.441 [2024-07-26 11:16:42.889131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.441 00:28:23.441 Latency(us) 00:28:23.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.441 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:23.441 nvme0n1 : 2.01 2047.97 256.00 0.00 0.00 7809.49 7066.49 29861.62 00:28:23.441 =================================================================================================================== 00:28:23.441 Total : 2047.97 256.00 0.00 0.00 7809.49 7066.49 29861.62 00:28:23.441 0 00:28:23.441 11:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:23.441 11:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:23.441 11:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:23.441 | .driver_specific 00:28:23.441 | .nvme_error 00:28:23.441 | .status_code 00:28:23.441 | .command_transient_transport_error' 00:28:23.441 11:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:23.701 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 132 > 0 )) 00:28:23.701 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1600096 00:28:23.701 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1600096 ']' 00:28:23.701 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1600096 00:28:23.701 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:23.701 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:23.701 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1600096 00:28:23.701 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:23.701 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:23.701 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1600096' 00:28:23.701 killing process with pid 1600096 00:28:23.701 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1600096 00:28:23.701 Received shutdown signal, test time was about 2.000000 seconds 00:28:23.701 00:28:23.701 Latency(us) 00:28:23.701 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:28:23.701 =================================================================================================================== 00:28:23.701 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:23.701 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1600096 00:28:23.961 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:23.961 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:23.961 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:23.961 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:23.961 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:23.961 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1600788 00:28:23.961 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1600788 /var/tmp/bperf.sock 00:28:23.961 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:23.961 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1600788 ']' 00:28:23.961 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:23.961 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:23.961 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:23.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:23.961 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:23.961 11:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.961 [2024-07-26 11:16:43.375319] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
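A short note on the step traced just before bdevperf is torn down and relaunched above (host/digest.sh@71 and @27): the randread leg is judged by reading bdevperf's per-command error counters back over its RPC socket. bdev_get_iostat output is filtered with jq down to command_transient_transport_error, and the leg counts as passed only if that counter is non-zero (132 in this run). A minimal sketch of that check, assuming only the rpc.py path and the /var/tmp/bperf.sock socket shown in this trace:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# read bdevperf's NVMe error counters over the bperf RPC socket
errcount=$($RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# the leg is only considered successful if at least one transient transport
# error was counted (132 in the run above)
(( errcount > 0 ))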
00:28:23.961 [2024-07-26 11:16:43.375364] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1600788 ] 00:28:23.961 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.961 [2024-07-26 11:16:43.442939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.220 [2024-07-26 11:16:43.516539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.787 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.787 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:24.787 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:24.787 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:25.045 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:25.045 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.045 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.045 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.045 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.045 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.304 nvme0n1 00:28:25.304 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:25.304 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.304 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.304 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.304 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:25.304 11:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:25.304 Running I/O for 2 seconds... 
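The trace above then arms the randwrite error-injection leg before kicking it off with perform_tests: NVMe error statistics are enabled and bdev retries made unlimited, crc32c error injection is first disabled so the controller can attach cleanly, the controller is attached with TCP data digest (--ddgst), and crc32c corruption is re-enabled so digest mismatches surface as TRANSIENT TRANSPORT ERROR completions. A sketch of that sequence using only the RPC calls and arguments visible above; the socket behind the rpc_cmd helper is not shown in the trace, so it is left at rpc.py's default here:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock
# count NVMe errors per command and retry failed I/O indefinitely
$RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# start with crc32c injection disabled so the attach below succeeds
# (rpc_cmd in the trace does not show an explicit socket; adjust -s as needed)
$RPC accel_error_inject_error -o crc32c -t disable
# attach with TCP data digest enabled so injected crc32c corruption becomes a digest error
$RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# re-arm corruption with the same arguments used in the trace above
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256
# drive the 2-second randwrite workload on the already-running bdevperf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests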
00:28:25.305 [2024-07-26 11:16:44.780754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fef90 00:28:25.305 [2024-07-26 11:16:44.781953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.305 [2024-07-26 11:16:44.781984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:25.305 [2024-07-26 11:16:44.792001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fda78 00:28:25.305 [2024-07-26 11:16:44.793314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.305 [2024-07-26 11:16:44.793337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.564 [2024-07-26 11:16:44.801370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190ed920 00:28:25.564 [2024-07-26 11:16:44.802709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.564 [2024-07-26 11:16:44.802731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.564 [2024-07-26 11:16:44.810592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7538 00:28:25.564 [2024-07-26 11:16:44.811805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.564 [2024-07-26 11:16:44.811829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.564 [2024-07-26 11:16:44.819698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190ebb98 00:28:25.564 [2024-07-26 11:16:44.820918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.564 [2024-07-26 11:16:44.820938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.564 [2024-07-26 11:16:44.828781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fda78 00:28:25.564 [2024-07-26 11:16:44.830157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.830176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.836591] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190ef6a8 00:28:25.565 [2024-07-26 11:16:44.839493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.839513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.851670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190eff18 00:28:25.565 [2024-07-26 11:16:44.852843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.852862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.860541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:25.565 [2024-07-26 11:16:44.861439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.861458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.870028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190ee5c8 00:28:25.565 [2024-07-26 11:16:44.871926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.871946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.880845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fdeb0 00:28:25.565 [2024-07-26 11:16:44.881876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.881895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.889619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190ec840 00:28:25.565 [2024-07-26 11:16:44.890528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.890547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.898611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f57b0 00:28:25.565 [2024-07-26 11:16:44.899525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.899543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.908531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f8a50 00:28:25.565 [2024-07-26 11:16:44.911513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.911533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.925530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f9b30 00:28:25.565 [2024-07-26 11:16:44.926570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.926593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.935555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f0bc0 00:28:25.565 [2024-07-26 11:16:44.936596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.936616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.945051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f0bc0 00:28:25.565 [2024-07-26 11:16:44.945259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.945276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.954515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f0bc0 00:28:25.565 [2024-07-26 11:16:44.954724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.954742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.963911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f0bc0 00:28:25.565 [2024-07-26 11:16:44.964437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.964455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.973363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f0bc0 00:28:25.565 [2024-07-26 11:16:44.973569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.973587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.982830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f0bc0 00:28:25.565 [2024-07-26 11:16:44.983039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.983062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:44.992305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f0bc0 00:28:25.565 [2024-07-26 11:16:44.992724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:44.992743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:45.001766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f0bc0 00:28:25.565 [2024-07-26 11:16:45.001970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:45.001988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:45.011213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f0bc0 00:28:25.565 [2024-07-26 11:16:45.011417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:45.011436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:45.020689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f0bc0 00:28:25.565 [2024-07-26 11:16:45.020894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:45.020912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:45.030388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f0788 00:28:25.565 [2024-07-26 11:16:45.033277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:45.033296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:45.044054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f0ff8 00:28:25.565 [2024-07-26 11:16:45.045722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:45.045740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:25.565 [2024-07-26 11:16:45.053958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f3a28 00:28:25.565 [2024-07-26 11:16:45.054921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.565 [2024-07-26 11:16:45.054941] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:25.860 [2024-07-26 11:16:45.063316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fbcf0 00:28:25.860 [2024-07-26 11:16:45.064259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.860 [2024-07-26 11:16:45.064279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:25.860 [2024-07-26 11:16:45.075117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f9f68 00:28:25.860 [2024-07-26 11:16:45.076671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.860 [2024-07-26 11:16:45.076693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:25.860 [2024-07-26 11:16:45.089620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fac10 00:28:25.860 [2024-07-26 11:16:45.090805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.860 [2024-07-26 11:16:45.090825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:25.860 [2024-07-26 11:16:45.099606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7da8 00:28:25.860 [2024-07-26 11:16:45.100565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.860 [2024-07-26 11:16:45.100584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.860 [2024-07-26 11:16:45.109159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7da8 00:28:25.860 [2024-07-26 11:16:45.109396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.860 [2024-07-26 11:16:45.109414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.860 [2024-07-26 11:16:45.118631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7da8 00:28:25.860 [2024-07-26 11:16:45.118859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.860 [2024-07-26 11:16:45.118878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.860 [2024-07-26 11:16:45.128311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7da8 00:28:25.860 [2024-07-26 11:16:45.128548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.860 [2024-07-26 11:16:45.128567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.860 [2024-07-26 11:16:45.137845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7da8 00:28:25.860 [2024-07-26 11:16:45.138081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.860 [2024-07-26 11:16:45.138100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.860 [2024-07-26 11:16:45.147320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7da8 00:28:25.860 [2024-07-26 11:16:45.147554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.860 [2024-07-26 11:16:45.147573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.860 [2024-07-26 11:16:45.156801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7da8 00:28:25.860 [2024-07-26 11:16:45.157039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.860 [2024-07-26 11:16:45.157062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.860 [2024-07-26 11:16:45.166298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7da8 00:28:25.860 [2024-07-26 11:16:45.166535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.860 [2024-07-26 11:16:45.166554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.860 [2024-07-26 11:16:45.175776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7da8 00:28:25.860 [2024-07-26 11:16:45.176011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.860 [2024-07-26 11:16:45.176029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.860 [2024-07-26 11:16:45.185241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7da8 00:28:25.860 [2024-07-26 11:16:45.185479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.860 [2024-07-26 11:16:45.185497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.860 [2024-07-26 11:16:45.194659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7da8 00:28:25.860 [2024-07-26 11:16:45.195342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.860 [2024-07-26 
11:16:45.195361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.204175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7da8 00:28:25.861 [2024-07-26 11:16:45.204727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.861 [2024-07-26 11:16:45.204745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.213586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7da8 00:28:25.861 [2024-07-26 11:16:45.214332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.861 [2024-07-26 11:16:45.214352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.223084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7da8 00:28:25.861 [2024-07-26 11:16:45.223319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.861 [2024-07-26 11:16:45.223337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.233254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f9f68 00:28:25.861 [2024-07-26 11:16:45.235665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.861 [2024-07-26 11:16:45.235684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.247132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:25.861 [2024-07-26 11:16:45.248477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.861 [2024-07-26 11:16:45.248496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.257348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fac10 00:28:25.861 [2024-07-26 11:16:45.257576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.861 [2024-07-26 11:16:45.257596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.266861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fac10 00:28:25.861 [2024-07-26 11:16:45.267070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:25.861 [2024-07-26 11:16:45.267097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.276303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fac10 00:28:25.861 [2024-07-26 11:16:45.276514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.861 [2024-07-26 11:16:45.276533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.287675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f8e88 00:28:25.861 [2024-07-26 11:16:45.289430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.861 [2024-07-26 11:16:45.289450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.299062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f8e88 00:28:25.861 [2024-07-26 11:16:45.299936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.861 [2024-07-26 11:16:45.299955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.308562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f8e88 00:28:25.861 [2024-07-26 11:16:45.308973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.861 [2024-07-26 11:16:45.308991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.321065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:25.861 [2024-07-26 11:16:45.322511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.861 [2024-07-26 11:16:45.322530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.332953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb048 00:28:25.861 [2024-07-26 11:16:45.333873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.861 [2024-07-26 11:16:45.333893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.341868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:25.861 [2024-07-26 11:16:45.342826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5377 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:25.861 [2024-07-26 11:16:45.342849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:25.861 [2024-07-26 11:16:45.351071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:25.861 [2024-07-26 11:16:45.351913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.861 [2024-07-26 11:16:45.351934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.360444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.122 [2024-07-26 11:16:45.361299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.361320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.369536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.122 [2024-07-26 11:16:45.370387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.370408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.378789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.122 [2024-07-26 11:16:45.379642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.379664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.387894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.122 [2024-07-26 11:16:45.388763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.388783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.397031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.122 [2024-07-26 11:16:45.397894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.397914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.406051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.122 [2024-07-26 11:16:45.407079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:22354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.407098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.415143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.122 [2024-07-26 11:16:45.415964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.415983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.424543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.122 [2024-07-26 11:16:45.425445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.425465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.433823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.122 [2024-07-26 11:16:45.434709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.434728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.442924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.122 [2024-07-26 11:16:45.443795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.443814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.452028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.122 [2024-07-26 11:16:45.452894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.452911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.461061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.122 [2024-07-26 11:16:45.461923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.461942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.470175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.122 [2024-07-26 11:16:45.471033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:76 nsid:1 lba:10345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.471056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.479286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.122 [2024-07-26 11:16:45.480135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.480155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.488366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.122 [2024-07-26 11:16:45.489225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.489244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.497461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.122 [2024-07-26 11:16:45.498342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.498361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.506544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.122 [2024-07-26 11:16:45.507412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:8225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.507432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.515628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.122 [2024-07-26 11:16:45.516485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.516505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.524732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.122 [2024-07-26 11:16:45.525587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.525607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.533836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.122 [2024-07-26 11:16:45.534701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.534720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.542925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.122 [2024-07-26 11:16:45.543783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.543801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.552165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.122 [2024-07-26 11:16:45.553036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.553061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.561259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.122 [2024-07-26 11:16:45.562116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.562136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.570352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.122 [2024-07-26 11:16:45.571214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.122 [2024-07-26 11:16:45.571233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.122 [2024-07-26 11:16:45.579438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.122 [2024-07-26 11:16:45.580294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.123 [2024-07-26 11:16:45.580317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.123 [2024-07-26 11:16:45.588494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.123 [2024-07-26 11:16:45.589369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.123 [2024-07-26 11:16:45.589390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.123 [2024-07-26 11:16:45.597734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.123 [2024-07-26 
11:16:45.598594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.123 [2024-07-26 11:16:45.598616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.123 [2024-07-26 11:16:45.606922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.123 [2024-07-26 11:16:45.607905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.123 [2024-07-26 11:16:45.607925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.616272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.383 [2024-07-26 11:16:45.617134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.617154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.625618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.383 [2024-07-26 11:16:45.626499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.626520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.634736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.383 [2024-07-26 11:16:45.635603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.635623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.643851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.383 [2024-07-26 11:16:45.644723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.644743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.652966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.383 [2024-07-26 11:16:45.653843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.653862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.662076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 
00:28:26.383 [2024-07-26 11:16:45.662940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.662959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.671197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.383 [2024-07-26 11:16:45.672048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.672068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.680278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.383 [2024-07-26 11:16:45.681101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.681121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.689353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.383 [2024-07-26 11:16:45.690229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.690247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.698460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.383 [2024-07-26 11:16:45.699328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.699347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.707553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.383 [2024-07-26 11:16:45.708413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.708432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.716603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.383 [2024-07-26 11:16:45.717450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.717469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.725676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) 
with pdu=0x2000190fcdd0 00:28:26.383 [2024-07-26 11:16:45.726679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.726699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.734750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.383 [2024-07-26 11:16:45.735633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.735653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.743836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.383 [2024-07-26 11:16:45.744696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.744716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.752920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.383 [2024-07-26 11:16:45.753761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.753780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.762020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.383 [2024-07-26 11:16:45.762886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.762905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.771123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.383 [2024-07-26 11:16:45.771992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.772012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.780198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.383 [2024-07-26 11:16:45.781056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.781075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.789264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.383 [2024-07-26 11:16:45.790120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.790139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.383 [2024-07-26 11:16:45.798351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f7970 00:28:26.383 [2024-07-26 11:16:45.799209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.383 [2024-07-26 11:16:45.799228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.384 [2024-07-26 11:16:45.807571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fcdd0 00:28:26.384 [2024-07-26 11:16:45.808400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.384 [2024-07-26 11:16:45.808418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.384 [2024-07-26 11:16:45.816648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f1868 00:28:26.384 [2024-07-26 11:16:45.818771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.384 [2024-07-26 11:16:45.818792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:26.384 [2024-07-26 11:16:45.832608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190f0350 00:28:26.384 [2024-07-26 11:16:45.834755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.384 [2024-07-26 11:16:45.834774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:26.384 [2024-07-26 11:16:45.846293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.384 [2024-07-26 11:16:45.847424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.384 [2024-07-26 11:16:45.847444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.384 [2024-07-26 11:16:45.855855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.384 [2024-07-26 11:16:45.856077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.384 [2024-07-26 11:16:45.856096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.384 [2024-07-26 11:16:45.865412] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.384 [2024-07-26 11:16:45.865632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.384 [2024-07-26 11:16:45.865650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.384 [2024-07-26 11:16:45.875074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.384 [2024-07-26 11:16:45.875298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.384 [2024-07-26 11:16:45.875318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:45.884807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:45.885028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:45.885052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:45.894333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:45.894552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:45.894570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:45.903821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:45.904041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:45.904064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:45.913336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:45.913559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:45.913578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:45.923007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:45.923236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:45.923255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:45.932558] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:45.932777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:45.932795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:45.942075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:45.942296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:45.942314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:45.951562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:45.951780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:45.951799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:45.961037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:45.961262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:45.961281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:45.970522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:45.970739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:45.970757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:45.980008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:45.980235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:45.980254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:45.989499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:45.989712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:45.989730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 
11:16:45.998980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:45.999206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:45.999225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:46.008463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:46.008683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:46.008701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:46.017963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:46.018191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:46.018210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:46.027482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:46.027703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:46.027722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.643 [2024-07-26 11:16:46.037041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.643 [2024-07-26 11:16:46.037265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.643 [2024-07-26 11:16:46.037284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.644 [2024-07-26 11:16:46.046525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.644 [2024-07-26 11:16:46.046745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.644 [2024-07-26 11:16:46.046764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.644 [2024-07-26 11:16:46.056019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.644 [2024-07-26 11:16:46.056248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.644 [2024-07-26 11:16:46.056266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:28:26.644 [2024-07-26 11:16:46.065749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.644 [2024-07-26 11:16:46.065968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.644 [2024-07-26 11:16:46.065986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.644 [2024-07-26 11:16:46.075269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.644 [2024-07-26 11:16:46.075491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.644 [2024-07-26 11:16:46.075513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.644 [2024-07-26 11:16:46.084748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.644 [2024-07-26 11:16:46.084968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.644 [2024-07-26 11:16:46.084987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.644 [2024-07-26 11:16:46.094249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.644 [2024-07-26 11:16:46.094471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.644 [2024-07-26 11:16:46.094489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.644 [2024-07-26 11:16:46.103743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.644 [2024-07-26 11:16:46.103963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.644 [2024-07-26 11:16:46.103981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.644 [2024-07-26 11:16:46.113251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.644 [2024-07-26 11:16:46.113470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.644 [2024-07-26 11:16:46.113488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.644 [2024-07-26 11:16:46.122774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.644 [2024-07-26 11:16:46.122997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.644 [2024-07-26 11:16:46.123016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 
sqhd:007d p:0 m:0 dnr:0 00:28:26.644 [2024-07-26 11:16:46.132353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.644 [2024-07-26 11:16:46.132576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.644 [2024-07-26 11:16:46.132594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.903 [2024-07-26 11:16:46.142193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.903 [2024-07-26 11:16:46.142417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.142437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.151745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.151966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.151985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.161225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.161444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.161466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.170729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.170950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.170969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.180219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.180438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.180457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.189681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.189901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.189919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.199182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.199402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.199420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.208635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.208854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.208873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.218151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.218373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.218391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.227641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.227862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.227881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.237175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.237393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.237411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.246648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.246866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.246884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.256327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.256548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.256566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.265822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.266038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.266060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.275267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.275487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.275505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.284744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.284966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.284985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.294270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.294491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.294509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.303752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.303970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.303988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.313245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.313465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.313483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.322901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.323121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.323139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.332443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.332663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.332681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.341993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.342220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.342239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.351518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.351738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.351757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.361001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.361230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.361248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.370521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.370738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.370757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.379997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.380222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.380241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:26.904 [2024-07-26 11:16:46.389582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:26.904 [2024-07-26 11:16:46.389802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:26.904 [2024-07-26 11:16:46.389821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.164 [2024-07-26 11:16:46.399320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.164 [2024-07-26 11:16:46.399545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.164 [2024-07-26 11:16:46.399564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.164 [2024-07-26 11:16:46.409162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.164 [2024-07-26 11:16:46.409387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.164 [2024-07-26 11:16:46.409410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.164 [2024-07-26 11:16:46.418643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.164 [2024-07-26 11:16:46.418867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.164 [2024-07-26 11:16:46.418886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.164 [2024-07-26 11:16:46.428169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.164 [2024-07-26 11:16:46.428391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.164 [2024-07-26 11:16:46.428410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.164 [2024-07-26 11:16:46.437676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.164 [2024-07-26 11:16:46.437896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.164 [2024-07-26 11:16:46.437915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.164 [2024-07-26 11:16:46.447177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.164 [2024-07-26 11:16:46.447397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.164 [2024-07-26 11:16:46.447416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.164 [2024-07-26 11:16:46.456629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.164 [2024-07-26 11:16:46.456848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.164 [2024-07-26 
11:16:46.456867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.164 [2024-07-26 11:16:46.466164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.164 [2024-07-26 11:16:46.466383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.164 [2024-07-26 11:16:46.466401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.164 [2024-07-26 11:16:46.475652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.164 [2024-07-26 11:16:46.475873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.475891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.485136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.485357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.485375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.494616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.494841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.494859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.504136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.504362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.504380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.513602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.513824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.513842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.523102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.523321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:27.165 [2024-07-26 11:16:46.523340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.532640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.532859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.532876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.542182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.542403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.542422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.551663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.551883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.551903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.561162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.561384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.561402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.570690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.570908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.570925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.580399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.580624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.580644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.589967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.590199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23445 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.590217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.599527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.599748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.599767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.608882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.609100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.609119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.618379] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.618602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.618620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.627887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.628108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.628127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.637435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.637654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.637672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.646894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.647117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.647135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.165 [2024-07-26 11:16:46.656505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.165 [2024-07-26 11:16:46.656730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11148 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.165 [2024-07-26 11:16:46.656752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.425 [2024-07-26 11:16:46.666306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.425 [2024-07-26 11:16:46.666529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.425 [2024-07-26 11:16:46.666547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.425 [2024-07-26 11:16:46.675781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.425 [2024-07-26 11:16:46.676006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.425 [2024-07-26 11:16:46.676025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.425 [2024-07-26 11:16:46.685260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.425 [2024-07-26 11:16:46.685484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.425 [2024-07-26 11:16:46.685503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.425 [2024-07-26 11:16:46.694782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.425 [2024-07-26 11:16:46.695000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.425 [2024-07-26 11:16:46.695018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.425 [2024-07-26 11:16:46.704445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.425 [2024-07-26 11:16:46.704668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.425 [2024-07-26 11:16:46.704688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.425 [2024-07-26 11:16:46.714059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.425 [2024-07-26 11:16:46.714280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.425 [2024-07-26 11:16:46.714299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.425 [2024-07-26 11:16:46.723581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.425 [2024-07-26 11:16:46.723802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:43 nsid:1 lba:23006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.425 [2024-07-26 11:16:46.723820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.425 [2024-07-26 11:16:46.733154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.425 [2024-07-26 11:16:46.733376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.425 [2024-07-26 11:16:46.733395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.425 [2024-07-26 11:16:46.742646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.425 [2024-07-26 11:16:46.742874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.425 [2024-07-26 11:16:46.742892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.425 [2024-07-26 11:16:46.752148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f1e420) with pdu=0x2000190fb480 00:28:27.426 [2024-07-26 11:16:46.752370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:27.426 [2024-07-26 11:16:46.752389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:27.426 00:28:27.426 Latency(us) 00:28:27.426 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.426 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:27.426 nvme0n1 : 2.00 26144.78 102.13 0.00 0.00 4887.40 3376.53 24846.69 00:28:27.426 =================================================================================================================== 00:28:27.426 Total : 26144.78 102.13 0.00 0.00 4887.40 3376.53 24846.69 00:28:27.426 0 00:28:27.426 11:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:27.426 11:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:27.426 11:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:27.426 | .driver_specific 00:28:27.426 | .nvme_error 00:28:27.426 | .status_code 00:28:27.426 | .command_transient_transport_error' 00:28:27.426 11:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:27.685 11:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 205 > 0 )) 00:28:27.685 11:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1600788 00:28:27.685 11:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1600788 ']' 00:28:27.685 11:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1600788 00:28:27.685 11:16:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:27.685 11:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:27.685 11:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1600788 00:28:27.685 11:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:27.685 11:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:27.685 11:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1600788' 00:28:27.685 killing process with pid 1600788 00:28:27.685 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1600788 00:28:27.685 Received shutdown signal, test time was about 2.000000 seconds 00:28:27.685 00:28:27.685 Latency(us) 00:28:27.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:27.685 =================================================================================================================== 00:28:27.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:27.685 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1600788 00:28:27.944 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:27.944 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:27.944 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:27.944 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:27.944 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:27.944 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1601471 00:28:27.944 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1601471 /var/tmp/bperf.sock 00:28:27.944 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:27.945 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1601471 ']' 00:28:27.945 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:27.945 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:27.945 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:27.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
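For reference, the pass/fail check the trace above just performed (digest.sh's get_transient_errcount helper followed by the (( 205 > 0 )) test) reduces to the shell sketch below. The rpc.py path, the /var/tmp/bperf.sock socket, and the jq filter are copied from the trace; the variable names are mine, and the block is only a sketch of what the helper does, not the helper itself (the trace sets --nvme-error-stat via bdev_nvme_set_options before attaching the controller, which is why this counter is reported).

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Ask bdevperf, over its dedicated RPC socket, for per-bdev I/O statistics and pull
# out the NVMe transient transport error counter from the driver-specific section.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# The test only asserts that at least one such error was observed (205 in the run above).
(( errcount > 0 )) || echo "no transient transport errors recorded" >&2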
00:28:27.945 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:27.945 11:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:27.945 [2024-07-26 11:16:47.228993] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:27.945 [2024-07-26 11:16:47.229041] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1601471 ] 00:28:27.945 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:27.945 Zero copy mechanism will not be used. 00:28:27.945 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.945 [2024-07-26 11:16:47.282950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.945 [2024-07-26 11:16:47.363737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.882 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:28.882 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:28.882 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:28.882 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:28.882 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:28.882 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.882 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:28.882 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.882 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:28.882 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:29.141 nvme0n1 00:28:29.141 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:29.141 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.141 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.141 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.141 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:29.141 11:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:29.400 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:29.400 Zero copy mechanism will not be used. 00:28:29.400 Running I/O for 2 seconds... 00:28:29.400 [2024-07-26 11:16:48.696109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.400 [2024-07-26 11:16:48.696778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.400 [2024-07-26 11:16:48.696807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.400 [2024-07-26 11:16:48.716906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.400 [2024-07-26 11:16:48.717383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.400 [2024-07-26 11:16:48.717407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.400 [2024-07-26 11:16:48.740275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.400 [2024-07-26 11:16:48.740960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.400 [2024-07-26 11:16:48.740982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.400 [2024-07-26 11:16:48.762783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.400 [2024-07-26 11:16:48.763766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.400 [2024-07-26 11:16:48.763787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.400 [2024-07-26 11:16:48.784532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.400 [2024-07-26 11:16:48.785226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.400 [2024-07-26 11:16:48.785247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.400 [2024-07-26 11:16:48.808311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.400 [2024-07-26 11:16:48.809013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.400 [2024-07-26 11:16:48.809034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.400 [2024-07-26 11:16:48.833134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.400 [2024-07-26 
11:16:48.834004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.400 [2024-07-26 11:16:48.834028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.400 [2024-07-26 11:16:48.856805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.400 [2024-07-26 11:16:48.857430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.400 [2024-07-26 11:16:48.857452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.400 [2024-07-26 11:16:48.882154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.400 [2024-07-26 11:16:48.882806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.400 [2024-07-26 11:16:48.882826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.660 [2024-07-26 11:16:48.907970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.660 [2024-07-26 11:16:48.908930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.660 [2024-07-26 11:16:48.908950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.660 [2024-07-26 11:16:48.931506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.660 [2024-07-26 11:16:48.932001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.660 [2024-07-26 11:16:48.932021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.660 [2024-07-26 11:16:48.953960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.660 [2024-07-26 11:16:48.955026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.660 [2024-07-26 11:16:48.955050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.660 [2024-07-26 11:16:48.976332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.660 [2024-07-26 11:16:48.977001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.660 [2024-07-26 11:16:48.977020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.660 [2024-07-26 11:16:49.000185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with 
pdu=0x2000190fef90 00:28:29.660 [2024-07-26 11:16:49.000967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.660 [2024-07-26 11:16:49.000986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.660 [2024-07-26 11:16:49.024883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.660 [2024-07-26 11:16:49.025682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.660 [2024-07-26 11:16:49.025702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.660 [2024-07-26 11:16:49.048906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.660 [2024-07-26 11:16:49.049459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.660 [2024-07-26 11:16:49.049478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.660 [2024-07-26 11:16:49.073035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.660 [2024-07-26 11:16:49.073812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.660 [2024-07-26 11:16:49.073831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.660 [2024-07-26 11:16:49.106759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.660 [2024-07-26 11:16:49.107909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.660 [2024-07-26 11:16:49.107929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.660 [2024-07-26 11:16:49.132693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.660 [2024-07-26 11:16:49.133592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.660 [2024-07-26 11:16:49.133612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.920 [2024-07-26 11:16:49.158047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.920 [2024-07-26 11:16:49.158654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.920 [2024-07-26 11:16:49.158674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.920 [2024-07-26 11:16:49.182679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.920 [2024-07-26 11:16:49.183373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.920 [2024-07-26 11:16:49.183392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.920 [2024-07-26 11:16:49.204576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.920 [2024-07-26 11:16:49.205262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.920 [2024-07-26 11:16:49.205282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.920 [2024-07-26 11:16:49.227839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.920 [2024-07-26 11:16:49.228439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.921 [2024-07-26 11:16:49.228460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.921 [2024-07-26 11:16:49.253106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.921 [2024-07-26 11:16:49.253805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.921 [2024-07-26 11:16:49.253825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.921 [2024-07-26 11:16:49.277348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.921 [2024-07-26 11:16:49.278128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.921 [2024-07-26 11:16:49.278149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.921 [2024-07-26 11:16:49.301177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.921 [2024-07-26 11:16:49.301946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.921 [2024-07-26 11:16:49.301965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:29.921 [2024-07-26 11:16:49.324442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.921 [2024-07-26 11:16:49.325131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.921 [2024-07-26 11:16:49.325151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:29.921 [2024-07-26 11:16:49.354664] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.921 [2024-07-26 11:16:49.355057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.921 [2024-07-26 11:16:49.355076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:29.921 [2024-07-26 11:16:49.380530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.921 [2024-07-26 11:16:49.381314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.921 [2024-07-26 11:16:49.381334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:29.921 [2024-07-26 11:16:49.403841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:29.921 [2024-07-26 11:16:49.404524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:29.921 [2024-07-26 11:16:49.404544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.181 [2024-07-26 11:16:49.429197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.181 [2024-07-26 11:16:49.430064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.181 [2024-07-26 11:16:49.430084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.181 [2024-07-26 11:16:49.454320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.181 [2024-07-26 11:16:49.454908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.181 [2024-07-26 11:16:49.454927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.181 [2024-07-26 11:16:49.479119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.181 [2024-07-26 11:16:49.479571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.181 [2024-07-26 11:16:49.479596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.181 [2024-07-26 11:16:49.503478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.181 [2024-07-26 11:16:49.504270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.181 [2024-07-26 11:16:49.504290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:30.181 [2024-07-26 11:16:49.529071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.181 [2024-07-26 11:16:49.530026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.181 [2024-07-26 11:16:49.530049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.181 [2024-07-26 11:16:49.554880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.181 [2024-07-26 11:16:49.555637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.181 [2024-07-26 11:16:49.555657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.181 [2024-07-26 11:16:49.587604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.181 [2024-07-26 11:16:49.588479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.181 [2024-07-26 11:16:49.588498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.181 [2024-07-26 11:16:49.612657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.181 [2024-07-26 11:16:49.613266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.181 [2024-07-26 11:16:49.613285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.181 [2024-07-26 11:16:49.637886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.181 [2024-07-26 11:16:49.638676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.181 [2024-07-26 11:16:49.638696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.181 [2024-07-26 11:16:49.663371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.181 [2024-07-26 11:16:49.664233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.181 [2024-07-26 11:16:49.664252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.441 [2024-07-26 11:16:49.686670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.441 [2024-07-26 11:16:49.687548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.441 [2024-07-26 11:16:49.687568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.441 [2024-07-26 11:16:49.711824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.441 [2024-07-26 11:16:49.712607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.441 [2024-07-26 11:16:49.712628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.441 [2024-07-26 11:16:49.735631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.441 [2024-07-26 11:16:49.736425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.441 [2024-07-26 11:16:49.736444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.441 [2024-07-26 11:16:49.760253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.441 [2024-07-26 11:16:49.761109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.441 [2024-07-26 11:16:49.761128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.441 [2024-07-26 11:16:49.784766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.441 [2024-07-26 11:16:49.785600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.441 [2024-07-26 11:16:49.785619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.441 [2024-07-26 11:16:49.809861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.441 [2024-07-26 11:16:49.810580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.441 [2024-07-26 11:16:49.810600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.441 [2024-07-26 11:16:49.836409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.441 [2024-07-26 11:16:49.836880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.441 [2024-07-26 11:16:49.836898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.441 [2024-07-26 11:16:49.860342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.441 [2024-07-26 11:16:49.860826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.441 [2024-07-26 11:16:49.860846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.441 [2024-07-26 11:16:49.885838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.441 [2024-07-26 11:16:49.886443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.441 [2024-07-26 11:16:49.886462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.441 [2024-07-26 11:16:49.910488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.441 [2024-07-26 11:16:49.910958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.441 [2024-07-26 11:16:49.910977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.701 [2024-07-26 11:16:49.936954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.701 [2024-07-26 11:16:49.937660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.701 [2024-07-26 11:16:49.937680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.701 [2024-07-26 11:16:49.963549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.701 [2024-07-26 11:16:49.963917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.701 [2024-07-26 11:16:49.963937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.701 [2024-07-26 11:16:49.988539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.701 [2024-07-26 11:16:49.989417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.701 [2024-07-26 11:16:49.989437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.701 [2024-07-26 11:16:50.012535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.701 [2024-07-26 11:16:50.013330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.701 [2024-07-26 11:16:50.013350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.701 [2024-07-26 11:16:50.031739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.701 [2024-07-26 11:16:50.032353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.701 [2024-07-26 11:16:50.032375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.701 [2024-07-26 11:16:50.054624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.701 [2024-07-26 11:16:50.055224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.702 [2024-07-26 11:16:50.055244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.702 [2024-07-26 11:16:50.076229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.702 [2024-07-26 11:16:50.077028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.702 [2024-07-26 11:16:50.077054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.702 [2024-07-26 11:16:50.098864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.702 [2024-07-26 11:16:50.099454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.702 [2024-07-26 11:16:50.099477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.702 [2024-07-26 11:16:50.122175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.702 [2024-07-26 11:16:50.122860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.702 [2024-07-26 11:16:50.122885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.702 [2024-07-26 11:16:50.145288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.702 [2024-07-26 11:16:50.145884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.702 [2024-07-26 11:16:50.145905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.702 [2024-07-26 11:16:50.167997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.702 [2024-07-26 11:16:50.168765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.702 [2024-07-26 11:16:50.168785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.702 [2024-07-26 11:16:50.192203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.702 [2024-07-26 11:16:50.192845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.702 
[2024-07-26 11:16:50.192869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.963 [2024-07-26 11:16:50.215327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.964 [2024-07-26 11:16:50.216106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.964 [2024-07-26 11:16:50.216128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.964 [2024-07-26 11:16:50.238160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.964 [2024-07-26 11:16:50.238758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.964 [2024-07-26 11:16:50.238778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.964 [2024-07-26 11:16:50.260893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.964 [2024-07-26 11:16:50.261592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.964 [2024-07-26 11:16:50.261611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.964 [2024-07-26 11:16:50.283860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.964 [2024-07-26 11:16:50.284647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.964 [2024-07-26 11:16:50.284667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.964 [2024-07-26 11:16:50.308254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.964 [2024-07-26 11:16:50.308981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.964 [2024-07-26 11:16:50.309000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.964 [2024-07-26 11:16:50.332686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.964 [2024-07-26 11:16:50.333391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.964 [2024-07-26 11:16:50.333411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.964 [2024-07-26 11:16:50.358367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.964 [2024-07-26 11:16:50.359163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.964 [2024-07-26 11:16:50.359183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:30.964 [2024-07-26 11:16:50.384630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.964 [2024-07-26 11:16:50.385473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.964 [2024-07-26 11:16:50.385493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:30.964 [2024-07-26 11:16:50.408363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.964 [2024-07-26 11:16:50.409317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.964 [2024-07-26 11:16:50.409338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:30.964 [2024-07-26 11:16:50.432596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.964 [2024-07-26 11:16:50.433444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.964 [2024-07-26 11:16:50.433464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:30.964 [2024-07-26 11:16:50.456401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:30.964 [2024-07-26 11:16:50.457094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:30.964 [2024-07-26 11:16:50.457115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.223 [2024-07-26 11:16:50.481509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:31.223 [2024-07-26 11:16:50.482281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.223 [2024-07-26 11:16:50.482301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.223 [2024-07-26 11:16:50.504597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:31.223 [2024-07-26 11:16:50.505293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.223 [2024-07-26 11:16:50.505313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.223 [2024-07-26 11:16:50.529636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:31.224 [2024-07-26 11:16:50.530384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.224 [2024-07-26 11:16:50.530408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.224 [2024-07-26 11:16:50.552917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:31.224 [2024-07-26 11:16:50.553703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.224 [2024-07-26 11:16:50.553723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.224 [2024-07-26 11:16:50.575953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:31.224 [2024-07-26 11:16:50.576739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.224 [2024-07-26 11:16:50.576759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.224 [2024-07-26 11:16:50.599663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:31.224 [2024-07-26 11:16:50.600521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.224 [2024-07-26 11:16:50.600541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.224 [2024-07-26 11:16:50.623662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:31.224 [2024-07-26 11:16:50.624310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.224 [2024-07-26 11:16:50.624329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.224 [2024-07-26 11:16:50.647909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f200a0) with pdu=0x2000190fef90 00:28:31.224 [2024-07-26 11:16:50.648625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:31.224 [2024-07-26 11:16:50.648645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.224 00:28:31.224 Latency(us) 00:28:31.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.224 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:31.224 nvme0n1 : 2.01 1256.80 157.10 0.00 0.00 12693.46 8662.15 39207.62 00:28:31.224 =================================================================================================================== 00:28:31.224 Total : 1256.80 157.10 0.00 0.00 12693.46 8662.15 39207.62 00:28:31.224 0 00:28:31.224 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:31.224 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:31.224 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:31.224 | .driver_specific 00:28:31.224 | .nvme_error 00:28:31.224 | .status_code 00:28:31.224 | .command_transient_transport_error' 00:28:31.224 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:31.483 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 81 > 0 )) 00:28:31.483 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1601471 00:28:31.483 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1601471 ']' 00:28:31.483 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1601471 00:28:31.483 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:31.483 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:31.483 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1601471 00:28:31.483 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:31.483 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:31.483 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1601471' 00:28:31.483 killing process with pid 1601471 00:28:31.483 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1601471 00:28:31.483 Received shutdown signal, test time was about 2.000000 seconds 00:28:31.483 00:28:31.483 Latency(us) 00:28:31.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.483 =================================================================================================================== 00:28:31.483 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:31.483 11:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1601471 00:28:31.743 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1599302 00:28:31.743 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1599302 ']' 00:28:31.743 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1599302 00:28:31.743 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:31.743 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:31.743 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1599302 00:28:31.743 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:31.743 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:31.743 11:16:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1599302' 00:28:31.744 killing process with pid 1599302 00:28:31.744 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1599302 00:28:31.744 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1599302 00:28:32.004 00:28:32.004 real 0m17.016s 00:28:32.004 user 0m33.797s 00:28:32.004 sys 0m3.413s 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.004 ************************************ 00:28:32.004 END TEST nvmf_digest_error 00:28:32.004 ************************************ 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:32.004 rmmod nvme_tcp 00:28:32.004 rmmod nvme_fabrics 00:28:32.004 rmmod nvme_keyring 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1599302 ']' 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1599302 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1599302 ']' 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1599302 00:28:32.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1599302) - No such process 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1599302 is not found' 00:28:32.004 Process with pid 1599302 is not found 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.004 11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.004 
11:16:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:34.545 00:28:34.545 real 0m41.132s 00:28:34.545 user 1m8.738s 00:28:34.545 sys 0m10.510s 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:34.545 ************************************ 00:28:34.545 END TEST nvmf_digest 00:28:34.545 ************************************ 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.545 ************************************ 00:28:34.545 START TEST nvmf_bdevperf 00:28:34.545 ************************************ 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:34.545 * Looking for test storage... 00:28:34.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.545 
11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:34.545 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:34.546 11:16:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.827 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.827 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:39.827 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:39.827 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:39.827 11:16:58 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:39.827 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:39.827 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:39.827 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:39.827 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:39.827 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:39.827 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:39.827 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:39.827 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:39.828 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.828 11:16:58 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:39.828 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:39.828 Found net devices under 0000:86:00.0: cvl_0_0 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:39.828 Found net devices under 0000:86:00.1: cvl_0_1 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:39.828 11:16:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:39.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:28:39.828 00:28:39.828 --- 10.0.0.2 ping statistics --- 00:28:39.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.828 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:39.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.420 ms 00:28:39.828 00:28:39.828 --- 10.0.0.1 ping statistics --- 00:28:39.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.828 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1605489 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1605489 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1605489 ']' 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:39.828 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:39.829 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.829 [2024-07-26 11:16:59.178880] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
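The nvmf_tcp_init sequence traced above moves the target-side NIC into its own network namespace before the nvmf target app starts; a condensed sketch, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses used in this run:

TARGET_NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add "$TARGET_NS"                        # target NIC gets its own namespace
ip link set cvl_0_0 netns "$TARGET_NS"
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address stays in the root namespace
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # initiator -> target reachability
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1    # target -> initiator reachability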
00:28:39.829 [2024-07-26 11:16:59.178923] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.829 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.829 [2024-07-26 11:16:59.234933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:39.829 [2024-07-26 11:16:59.315326] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.829 [2024-07-26 11:16:59.315362] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.829 [2024-07-26 11:16:59.315369] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.829 [2024-07-26 11:16:59.315375] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.829 [2024-07-26 11:16:59.315380] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.829 [2024-07-26 11:16:59.315416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.829 [2024-07-26 11:16:59.315503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.829 [2024-07-26 11:16:59.315504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.767 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:40.767 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:40.767 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:40.767 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:40.767 11:16:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.767 [2024-07-26 11:17:00.031048] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.767 Malloc0 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.767 11:17:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.767 [2024-07-26 11:17:00.085494] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:40.767 { 00:28:40.767 "params": { 00:28:40.767 "name": "Nvme$subsystem", 00:28:40.767 "trtype": "$TEST_TRANSPORT", 00:28:40.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:40.767 "adrfam": "ipv4", 00:28:40.767 "trsvcid": "$NVMF_PORT", 00:28:40.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:40.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:40.767 "hdgst": ${hdgst:-false}, 00:28:40.767 "ddgst": ${ddgst:-false} 00:28:40.767 }, 00:28:40.767 "method": "bdev_nvme_attach_controller" 00:28:40.767 } 00:28:40.767 EOF 00:28:40.767 )") 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:40.767 11:17:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:40.767 "params": { 00:28:40.767 "name": "Nvme1", 00:28:40.767 "trtype": "tcp", 00:28:40.767 "traddr": "10.0.0.2", 00:28:40.767 "adrfam": "ipv4", 00:28:40.767 "trsvcid": "4420", 00:28:40.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:40.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:40.767 "hdgst": false, 00:28:40.767 "ddgst": false 00:28:40.767 }, 00:28:40.767 "method": "bdev_nvme_attach_controller" 00:28:40.767 }' 00:28:40.767 [2024-07-26 11:17:00.134835] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
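Taken together, the rpc_cmd calls and the generated JSON traced above stand up one NVMe/TCP subsystem backed by a malloc bdev and point bdevperf at it. A sketch of the equivalent standalone commands, assuming the paths, NQNs, and addresses from this run and that the suite's gen_nvmf_target_json helper (nvmf/common.sh) is sourced:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevperf then attaches over NVMe/TCP using the JSON printed above
# (bdev_nvme_attach_controller for Nvme1 at 10.0.0.2:4420, digests disabled)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1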
00:28:40.767 [2024-07-26 11:17:00.134877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605734 ] 00:28:40.767 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.767 [2024-07-26 11:17:00.187945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.027 [2024-07-26 11:17:00.263225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.027 Running I/O for 1 seconds... 00:28:41.965 00:28:41.965 Latency(us) 00:28:41.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.965 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:41.965 Verification LBA range: start 0x0 length 0x4000 00:28:41.965 Nvme1n1 : 1.01 10948.37 42.77 0.00 0.00 11646.57 2478.97 31229.33 00:28:41.965 =================================================================================================================== 00:28:41.965 Total : 10948.37 42.77 0.00 0.00 11646.57 2478.97 31229.33 00:28:42.226 11:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1605967 00:28:42.226 11:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:42.226 11:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:42.226 11:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:42.226 11:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:42.226 11:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:42.226 11:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:42.226 11:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:42.226 { 00:28:42.226 "params": { 00:28:42.226 "name": "Nvme$subsystem", 00:28:42.226 "trtype": "$TEST_TRANSPORT", 00:28:42.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.226 "adrfam": "ipv4", 00:28:42.226 "trsvcid": "$NVMF_PORT", 00:28:42.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.226 "hdgst": ${hdgst:-false}, 00:28:42.226 "ddgst": ${ddgst:-false} 00:28:42.226 }, 00:28:42.226 "method": "bdev_nvme_attach_controller" 00:28:42.226 } 00:28:42.226 EOF 00:28:42.226 )") 00:28:42.226 11:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:42.226 11:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:42.226 11:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:42.226 11:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:42.226 "params": { 00:28:42.226 "name": "Nvme1", 00:28:42.226 "trtype": "tcp", 00:28:42.226 "traddr": "10.0.0.2", 00:28:42.226 "adrfam": "ipv4", 00:28:42.226 "trsvcid": "4420", 00:28:42.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:42.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:42.226 "hdgst": false, 00:28:42.226 "ddgst": false 00:28:42.226 }, 00:28:42.226 "method": "bdev_nvme_attach_controller" 00:28:42.226 }' 00:28:42.226 [2024-07-26 11:17:01.662917] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:42.226 [2024-07-26 11:17:01.662965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605967 ] 00:28:42.226 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.226 [2024-07-26 11:17:01.717552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.486 [2024-07-26 11:17:01.789247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.746 Running I/O for 15 seconds... 00:28:45.312 11:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1605489 00:28:45.312 11:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:45.312 [2024-07-26 11:17:04.631444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.312 [2024-07-26 11:17:04.631486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.312 [2024-07-26 11:17:04.631505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.312 [2024-07-26 11:17:04.631515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.312 [2024-07-26 11:17:04.631525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.312 [2024-07-26 11:17:04.631532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.312 [2024-07-26 11:17:04.631541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.312 [2024-07-26 11:17:04.631548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.312 [2024-07-26 11:17:04.631557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.312 [2024-07-26 11:17:04.631565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.312 [2024-07-26 11:17:04.631576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
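The long run of ABORTED - SQ DELETION completions that begins here and continues below is the expected fallout of the failover part of the test: bdevperf is left running a 15-second verify workload while the nvmf target is killed out from under it. In outline (a sketch, assuming the nvmfpid and bdevperf invocation traced above):

# start bdevperf in the background; -f keeps it running after the target disappears
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!
sleep 3
kill -9 "$nvmfpid"   # hard-kill the nvmf target mid-workload
sleep 3              # in-flight I/O is then completed with the aborted status seen in this log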
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 
[2024-07-26 11:17:04.631961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.313 [2024-07-26 11:17:04.631984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.631992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.313 [2024-07-26 11:17:04.631999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.632007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.313 [2024-07-26 11:17:04.632013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.632021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.313 [2024-07-26 11:17:04.632028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.632036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.313 [2024-07-26 11:17:04.632046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.632054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.313 [2024-07-26 11:17:04.632060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.632069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.313 [2024-07-26 11:17:04.632076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.632084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.313 [2024-07-26 11:17:04.632090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.632099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.313 [2024-07-26 11:17:04.632105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.313 [2024-07-26 11:17:04.632113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.313 [2024-07-26 11:17:04.632121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:85 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96168 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.314 [2024-07-26 11:17:04.632518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.314 [2024-07-26 11:17:04.632533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.314 [2024-07-26 11:17:04.632548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.314 [2024-07-26 
11:17:04.632563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.314 [2024-07-26 11:17:04.632578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.314 [2024-07-26 11:17:04.632593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.314 [2024-07-26 11:17:04.632608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.314 [2024-07-26 11:17:04.632623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.314 [2024-07-26 11:17:04.632637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.314 [2024-07-26 11:17:04.632652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.314 [2024-07-26 11:17:04.632667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.314 [2024-07-26 11:17:04.632682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.314 [2024-07-26 11:17:04.632690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.314 [2024-07-26 11:17:04.632697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.315 [2024-07-26 11:17:04.632983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.632992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.632998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 
[2024-07-26 11:17:04.633174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.315 [2024-07-26 11:17:04.633285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.315 [2024-07-26 11:17:04.633294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.316 [2024-07-26 11:17:04.633300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.316 [2024-07-26 11:17:04.633308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.316 [2024-07-26 11:17:04.633314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.316 [2024-07-26 11:17:04.633322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.316 [2024-07-26 11:17:04.633328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.316 [2024-07-26 11:17:04.633336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.316 [2024-07-26 11:17:04.633344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.316 [2024-07-26 11:17:04.633354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.316 [2024-07-26 11:17:04.633360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.316 [2024-07-26 11:17:04.633368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:45.316 [2024-07-26 11:17:04.633374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.316 [2024-07-26 11:17:04.633382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.316 [2024-07-26 11:17:04.633389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.316 [2024-07-26 11:17:04.633396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.316 [2024-07-26 11:17:04.633404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.316 [2024-07-26 11:17:04.633412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.316 [2024-07-26 11:17:04.633418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.316 [2024-07-26 11:17:04.633426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.316 [2024-07-26 11:17:04.633436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.316 [2024-07-26 11:17:04.633443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3cee0 is same with the state(5) to be set 00:28:45.316 [2024-07-26 11:17:04.633451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:45.316 [2024-07-26 11:17:04.633457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:45.316 [2024-07-26 11:17:04.633463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95936 len:8 PRP1 0x0 PRP2 0x0 00:28:45.316 [2024-07-26 11:17:04.633470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:45.316 [2024-07-26 11:17:04.633512] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe3cee0 was disconnected and freed. reset controller. 00:28:45.316 [2024-07-26 11:17:04.636470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.316 [2024-07-26 11:17:04.636525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.316 [2024-07-26 11:17:04.637369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.316 [2024-07-26 11:17:04.637426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.316 [2024-07-26 11:17:04.637450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.316 [2024-07-26 11:17:04.637987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.316 [2024-07-26 11:17:04.638168] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.316 [2024-07-26 11:17:04.638177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.316 [2024-07-26 11:17:04.638184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.316 [2024-07-26 11:17:04.642071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.316 [2024-07-26 11:17:04.650446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.316 [2024-07-26 11:17:04.651186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.316 [2024-07-26 11:17:04.651231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.316 [2024-07-26 11:17:04.651254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.316 [2024-07-26 11:17:04.651835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.316 [2024-07-26 11:17:04.652419] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.316 [2024-07-26 11:17:04.652430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.316 [2024-07-26 11:17:04.652437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.316 [2024-07-26 11:17:04.655137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
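The dump above is every outstanding READ/WRITE on qpair 1 being completed as "ABORTED - SQ DELETION (00/08)" before the qpair is freed and the controller reset begins. For readers decoding the "(00/08)" pair, it is the usual NVMe status-code-type/status-code split (SCT 0x0 = Generic Command Status, SC 0x08 = Command Aborted due to SQ Deletion). The standalone C sketch below is illustrative only; decode_status is a made-up helper for this note, not SPDK code.

/*
 * Illustrative only: minimal decoder for the "(00/08)" status printed in the
 * qpair abort dump above, assuming the standard NVMe (status code type /
 * status code) encoding. Not SPDK code.
 */
#include <stdio.h>

static const char *decode_status(unsigned sct, unsigned sc)
{
    /* SCT 0x0 is Generic Command Status; within it, SC 0x08 is
     * "Command Aborted due to SQ Deletion", matching the
     * "ABORTED - SQ DELETION (00/08)" text in the log. */
    if (sct == 0x0 && sc == 0x08)
        return "ABORTED - SQ DELETION";
    return "unknown status";
}

int main(void)
{
    printf("(00/08) -> %s\n", decode_status(0x0, 0x08));
    return 0;
}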
00:28:45.316 [2024-07-26 11:17:04.663258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.316 [2024-07-26 11:17:04.663926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.316 [2024-07-26 11:17:04.663970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.316 [2024-07-26 11:17:04.663992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.316 [2024-07-26 11:17:04.664418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.316 [2024-07-26 11:17:04.664593] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.316 [2024-07-26 11:17:04.664602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.316 [2024-07-26 11:17:04.664608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.316 [2024-07-26 11:17:04.667259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.316 [2024-07-26 11:17:04.676077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.316 [2024-07-26 11:17:04.676816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.316 [2024-07-26 11:17:04.676861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.316 [2024-07-26 11:17:04.676883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.316 [2024-07-26 11:17:04.677293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.316 [2024-07-26 11:17:04.677468] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.316 [2024-07-26 11:17:04.677478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.316 [2024-07-26 11:17:04.677485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.316 [2024-07-26 11:17:04.680164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.316 [2024-07-26 11:17:04.688962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.316 [2024-07-26 11:17:04.689701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.316 [2024-07-26 11:17:04.689745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.316 [2024-07-26 11:17:04.689766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.316 [2024-07-26 11:17:04.690154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.316 [2024-07-26 11:17:04.690328] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.316 [2024-07-26 11:17:04.690338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.316 [2024-07-26 11:17:04.690344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.316 [2024-07-26 11:17:04.693001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.316 [2024-07-26 11:17:04.701817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.316 [2024-07-26 11:17:04.702521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.316 [2024-07-26 11:17:04.702563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.316 [2024-07-26 11:17:04.702585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.316 [2024-07-26 11:17:04.703175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.316 [2024-07-26 11:17:04.703586] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.316 [2024-07-26 11:17:04.703596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.316 [2024-07-26 11:17:04.703606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.316 [2024-07-26 11:17:04.706349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.316 [2024-07-26 11:17:04.714702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.316 [2024-07-26 11:17:04.715434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.316 [2024-07-26 11:17:04.715477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.316 [2024-07-26 11:17:04.715498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.316 [2024-07-26 11:17:04.715685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.316 [2024-07-26 11:17:04.715848] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.317 [2024-07-26 11:17:04.715858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.317 [2024-07-26 11:17:04.715864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.317 [2024-07-26 11:17:04.718554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.317 [2024-07-26 11:17:04.727669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.317 [2024-07-26 11:17:04.728327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.317 [2024-07-26 11:17:04.728370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.317 [2024-07-26 11:17:04.728392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.317 [2024-07-26 11:17:04.728952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.317 [2024-07-26 11:17:04.729215] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.317 [2024-07-26 11:17:04.729229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.317 [2024-07-26 11:17:04.729239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.317 [2024-07-26 11:17:04.733290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.317 [2024-07-26 11:17:04.741044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.317 [2024-07-26 11:17:04.741717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.317 [2024-07-26 11:17:04.741761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.317 [2024-07-26 11:17:04.741783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.317 [2024-07-26 11:17:04.742316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.317 [2024-07-26 11:17:04.742492] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.317 [2024-07-26 11:17:04.742502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.317 [2024-07-26 11:17:04.742508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.317 [2024-07-26 11:17:04.745210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.317 [2024-07-26 11:17:04.754088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.317 [2024-07-26 11:17:04.754767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.317 [2024-07-26 11:17:04.754818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.317 [2024-07-26 11:17:04.754841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.317 [2024-07-26 11:17:04.755432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.317 [2024-07-26 11:17:04.755701] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.317 [2024-07-26 11:17:04.755710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.317 [2024-07-26 11:17:04.755717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.317 [2024-07-26 11:17:04.758396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.317 [2024-07-26 11:17:04.767069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.317 [2024-07-26 11:17:04.767770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.317 [2024-07-26 11:17:04.767813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.317 [2024-07-26 11:17:04.767834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.317 [2024-07-26 11:17:04.768425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.317 [2024-07-26 11:17:04.768749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.317 [2024-07-26 11:17:04.768758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.317 [2024-07-26 11:17:04.768764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.317 [2024-07-26 11:17:04.771392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.317 [2024-07-26 11:17:04.780036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.317 [2024-07-26 11:17:04.780683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.317 [2024-07-26 11:17:04.780726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.317 [2024-07-26 11:17:04.780749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.317 [2024-07-26 11:17:04.781339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.317 [2024-07-26 11:17:04.781664] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.317 [2024-07-26 11:17:04.781674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.317 [2024-07-26 11:17:04.781680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.317 [2024-07-26 11:17:04.784349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.317 [2024-07-26 11:17:04.792963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.317 [2024-07-26 11:17:04.793671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.317 [2024-07-26 11:17:04.793715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.317 [2024-07-26 11:17:04.793736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.317 [2024-07-26 11:17:04.794326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.317 [2024-07-26 11:17:04.794503] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.317 [2024-07-26 11:17:04.794513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.317 [2024-07-26 11:17:04.794519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.317 [2024-07-26 11:17:04.797170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.578 [2024-07-26 11:17:04.806138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.578 [2024-07-26 11:17:04.806893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.578 [2024-07-26 11:17:04.806936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.578 [2024-07-26 11:17:04.806958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.578 [2024-07-26 11:17:04.807413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.578 [2024-07-26 11:17:04.807587] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.578 [2024-07-26 11:17:04.807597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.578 [2024-07-26 11:17:04.807603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.578 [2024-07-26 11:17:04.810248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.578 [2024-07-26 11:17:04.819118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.578 [2024-07-26 11:17:04.819846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.578 [2024-07-26 11:17:04.819888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.578 [2024-07-26 11:17:04.819911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.578 [2024-07-26 11:17:04.820506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.578 [2024-07-26 11:17:04.820823] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.578 [2024-07-26 11:17:04.820832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.578 [2024-07-26 11:17:04.820839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.578 [2024-07-26 11:17:04.823593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.578 [2024-07-26 11:17:04.832006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.578 [2024-07-26 11:17:04.832738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.578 [2024-07-26 11:17:04.832782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.578 [2024-07-26 11:17:04.832804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.578 [2024-07-26 11:17:04.833398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.578 [2024-07-26 11:17:04.833798] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.578 [2024-07-26 11:17:04.833808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.578 [2024-07-26 11:17:04.833815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.578 [2024-07-26 11:17:04.836450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.578 [2024-07-26 11:17:04.844866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.578 [2024-07-26 11:17:04.845597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.578 [2024-07-26 11:17:04.845640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.578 [2024-07-26 11:17:04.845662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.578 [2024-07-26 11:17:04.846254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.578 [2024-07-26 11:17:04.846757] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.578 [2024-07-26 11:17:04.846767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.578 [2024-07-26 11:17:04.846773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.578 [2024-07-26 11:17:04.849500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.578 [2024-07-26 11:17:04.857703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.578 [2024-07-26 11:17:04.858429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.578 [2024-07-26 11:17:04.858473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.578 [2024-07-26 11:17:04.858495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.578 [2024-07-26 11:17:04.858899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.578 [2024-07-26 11:17:04.859068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.578 [2024-07-26 11:17:04.859077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.578 [2024-07-26 11:17:04.859083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.578 [2024-07-26 11:17:04.861674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.578 [2024-07-26 11:17:04.870560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.578 [2024-07-26 11:17:04.871291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.578 [2024-07-26 11:17:04.871336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.578 [2024-07-26 11:17:04.871359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.578 [2024-07-26 11:17:04.871909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.578 [2024-07-26 11:17:04.872077] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.578 [2024-07-26 11:17:04.872086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.579 [2024-07-26 11:17:04.872093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.579 [2024-07-26 11:17:04.874686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.579 [2024-07-26 11:17:04.883418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.579 [2024-07-26 11:17:04.884172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.579 [2024-07-26 11:17:04.884216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.579 [2024-07-26 11:17:04.884245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.579 [2024-07-26 11:17:04.884824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.579 [2024-07-26 11:17:04.885280] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.579 [2024-07-26 11:17:04.885306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.579 [2024-07-26 11:17:04.885316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.579 [2024-07-26 11:17:04.888164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.579 [2024-07-26 11:17:04.896636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.579 [2024-07-26 11:17:04.897356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.579 [2024-07-26 11:17:04.897376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.579 [2024-07-26 11:17:04.897384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.579 [2024-07-26 11:17:04.897557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.579 [2024-07-26 11:17:04.897749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.579 [2024-07-26 11:17:04.897759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.579 [2024-07-26 11:17:04.897766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.579 [2024-07-26 11:17:04.900586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.579 [2024-07-26 11:17:04.909431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.579 [2024-07-26 11:17:04.910065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.579 [2024-07-26 11:17:04.910109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.579 [2024-07-26 11:17:04.910132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.579 [2024-07-26 11:17:04.910655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.579 [2024-07-26 11:17:04.910910] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.579 [2024-07-26 11:17:04.910923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.579 [2024-07-26 11:17:04.910932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.579 [2024-07-26 11:17:04.914984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.579 [2024-07-26 11:17:04.922742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.579 [2024-07-26 11:17:04.923474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.579 [2024-07-26 11:17:04.923519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.579 [2024-07-26 11:17:04.923541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.579 [2024-07-26 11:17:04.923792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.579 [2024-07-26 11:17:04.923961] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.579 [2024-07-26 11:17:04.923974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.579 [2024-07-26 11:17:04.923980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.579 [2024-07-26 11:17:04.926711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.579 [2024-07-26 11:17:04.935546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.579 [2024-07-26 11:17:04.936214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.579 [2024-07-26 11:17:04.936257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.579 [2024-07-26 11:17:04.936280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.579 [2024-07-26 11:17:04.936859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.579 [2024-07-26 11:17:04.937038] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.579 [2024-07-26 11:17:04.937053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.579 [2024-07-26 11:17:04.937060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.579 [2024-07-26 11:17:04.939690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.579 [2024-07-26 11:17:04.948510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.579 [2024-07-26 11:17:04.949254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.579 [2024-07-26 11:17:04.949298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.579 [2024-07-26 11:17:04.949320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.579 [2024-07-26 11:17:04.949899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.579 [2024-07-26 11:17:04.950244] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.579 [2024-07-26 11:17:04.950255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.579 [2024-07-26 11:17:04.950261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.579 [2024-07-26 11:17:04.952923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.579 [2024-07-26 11:17:04.961429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.579 [2024-07-26 11:17:04.962137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.579 [2024-07-26 11:17:04.962180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.579 [2024-07-26 11:17:04.962202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.579 [2024-07-26 11:17:04.962781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.579 [2024-07-26 11:17:04.963377] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.579 [2024-07-26 11:17:04.963404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.579 [2024-07-26 11:17:04.963437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.579 [2024-07-26 11:17:04.966145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.579 [2024-07-26 11:17:04.974355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.579 [2024-07-26 11:17:04.975065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.579 [2024-07-26 11:17:04.975109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.579 [2024-07-26 11:17:04.975131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.579 [2024-07-26 11:17:04.975711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.579 [2024-07-26 11:17:04.976153] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.579 [2024-07-26 11:17:04.976163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.579 [2024-07-26 11:17:04.976170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.579 [2024-07-26 11:17:04.978834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.579 [2024-07-26 11:17:04.987203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.579 [2024-07-26 11:17:04.987933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.579 [2024-07-26 11:17:04.987977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.579 [2024-07-26 11:17:04.987999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.579 [2024-07-26 11:17:04.988284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.579 [2024-07-26 11:17:04.988458] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.579 [2024-07-26 11:17:04.988468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.579 [2024-07-26 11:17:04.988474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.579 [2024-07-26 11:17:04.991124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.579 [2024-07-26 11:17:04.999990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.579 [2024-07-26 11:17:05.000435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.580 [2024-07-26 11:17:05.000451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.580 [2024-07-26 11:17:05.000458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.580 [2024-07-26 11:17:05.000621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.580 [2024-07-26 11:17:05.000784] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.580 [2024-07-26 11:17:05.000793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.580 [2024-07-26 11:17:05.000799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.580 [2024-07-26 11:17:05.003487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.580 [2024-07-26 11:17:05.012912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.580 [2024-07-26 11:17:05.013645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.580 [2024-07-26 11:17:05.013689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.580 [2024-07-26 11:17:05.013710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.580 [2024-07-26 11:17:05.014033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.580 [2024-07-26 11:17:05.014227] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.580 [2024-07-26 11:17:05.014237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.580 [2024-07-26 11:17:05.014243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.580 [2024-07-26 11:17:05.016900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.580 [2024-07-26 11:17:05.025811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.580 [2024-07-26 11:17:05.026530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.580 [2024-07-26 11:17:05.026547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.580 [2024-07-26 11:17:05.026555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.580 [2024-07-26 11:17:05.026717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.580 [2024-07-26 11:17:05.026880] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.580 [2024-07-26 11:17:05.026889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.580 [2024-07-26 11:17:05.026895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.580 [2024-07-26 11:17:05.029588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.580 [2024-07-26 11:17:05.038647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.580 [2024-07-26 11:17:05.039373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.580 [2024-07-26 11:17:05.039417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.580 [2024-07-26 11:17:05.039440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.580 [2024-07-26 11:17:05.040004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.580 [2024-07-26 11:17:05.040182] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.580 [2024-07-26 11:17:05.040192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.580 [2024-07-26 11:17:05.040198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.580 [2024-07-26 11:17:05.042804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.580 [2024-07-26 11:17:05.051646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.580 [2024-07-26 11:17:05.052300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.580 [2024-07-26 11:17:05.052344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.580 [2024-07-26 11:17:05.052366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.580 [2024-07-26 11:17:05.052945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.580 [2024-07-26 11:17:05.053171] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.580 [2024-07-26 11:17:05.053181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.580 [2024-07-26 11:17:05.053191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.580 [2024-07-26 11:17:05.055837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.580 [2024-07-26 11:17:05.064635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.580 [2024-07-26 11:17:05.065330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.580 [2024-07-26 11:17:05.065373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.580 [2024-07-26 11:17:05.065396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.580 [2024-07-26 11:17:05.065857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.580 [2024-07-26 11:17:05.066022] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.580 [2024-07-26 11:17:05.066032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.580 [2024-07-26 11:17:05.066038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.580 [2024-07-26 11:17:05.068828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.841 [2024-07-26 11:17:05.077731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.841 [2024-07-26 11:17:05.078475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.841 [2024-07-26 11:17:05.078520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.841 [2024-07-26 11:17:05.078542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.841 [2024-07-26 11:17:05.079136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.841 [2024-07-26 11:17:05.079535] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.841 [2024-07-26 11:17:05.079545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.841 [2024-07-26 11:17:05.079552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.841 [2024-07-26 11:17:05.082345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.841 [2024-07-26 11:17:05.090846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.841 [2024-07-26 11:17:05.091529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.841 [2024-07-26 11:17:05.091573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.841 [2024-07-26 11:17:05.091595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.841 [2024-07-26 11:17:05.092189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.841 [2024-07-26 11:17:05.092572] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.841 [2024-07-26 11:17:05.092586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.841 [2024-07-26 11:17:05.092595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.841 [2024-07-26 11:17:05.096644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.841 [2024-07-26 11:17:05.104315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.841 [2024-07-26 11:17:05.104877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.841 [2024-07-26 11:17:05.104927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.841 [2024-07-26 11:17:05.104950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.841 [2024-07-26 11:17:05.105394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.841 [2024-07-26 11:17:05.105563] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.841 [2024-07-26 11:17:05.105572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.841 [2024-07-26 11:17:05.105579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.841 [2024-07-26 11:17:05.108310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.841 [2024-07-26 11:17:05.117309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.841 [2024-07-26 11:17:05.118015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.841 [2024-07-26 11:17:05.118069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.841 [2024-07-26 11:17:05.118092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.841 [2024-07-26 11:17:05.118452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.841 [2024-07-26 11:17:05.118626] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.841 [2024-07-26 11:17:05.118636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.841 [2024-07-26 11:17:05.118642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.841 [2024-07-26 11:17:05.121283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.841 [2024-07-26 11:17:05.130231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.841 [2024-07-26 11:17:05.130962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.841 [2024-07-26 11:17:05.131005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.841 [2024-07-26 11:17:05.131028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.841 [2024-07-26 11:17:05.131623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.841 [2024-07-26 11:17:05.131915] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.841 [2024-07-26 11:17:05.131924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.841 [2024-07-26 11:17:05.131931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.841 [2024-07-26 11:17:05.134725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.841 [2024-07-26 11:17:05.143425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.841 [2024-07-26 11:17:05.144081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.841 [2024-07-26 11:17:05.144126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.841 [2024-07-26 11:17:05.144148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.841 [2024-07-26 11:17:05.144605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.841 [2024-07-26 11:17:05.144781] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.841 [2024-07-26 11:17:05.144791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.841 [2024-07-26 11:17:05.144798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.841 [2024-07-26 11:17:05.147431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.841 [2024-07-26 11:17:05.156360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.841 [2024-07-26 11:17:05.157089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.841 [2024-07-26 11:17:05.157134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.841 [2024-07-26 11:17:05.157166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.841 [2024-07-26 11:17:05.157329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.841 [2024-07-26 11:17:05.157494] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.841 [2024-07-26 11:17:05.157504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.841 [2024-07-26 11:17:05.157510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.842 [2024-07-26 11:17:05.160203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.842 [2024-07-26 11:17:05.169181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.842 [2024-07-26 11:17:05.169807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.842 [2024-07-26 11:17:05.169850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.842 [2024-07-26 11:17:05.169872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.842 [2024-07-26 11:17:05.170467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.842 [2024-07-26 11:17:05.170936] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.842 [2024-07-26 11:17:05.170945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.842 [2024-07-26 11:17:05.170953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.842 [2024-07-26 11:17:05.173576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.842 [2024-07-26 11:17:05.182181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.842 [2024-07-26 11:17:05.182914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.842 [2024-07-26 11:17:05.182957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.842 [2024-07-26 11:17:05.182980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.842 [2024-07-26 11:17:05.183341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.842 [2024-07-26 11:17:05.183535] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.842 [2024-07-26 11:17:05.183547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.842 [2024-07-26 11:17:05.183557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.842 [2024-07-26 11:17:05.187619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.842 [2024-07-26 11:17:05.195771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.842 [2024-07-26 11:17:05.196402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.842 [2024-07-26 11:17:05.196445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.842 [2024-07-26 11:17:05.196467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.842 [2024-07-26 11:17:05.197062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.842 [2024-07-26 11:17:05.197639] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.842 [2024-07-26 11:17:05.197648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.842 [2024-07-26 11:17:05.197655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.842 [2024-07-26 11:17:05.200349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.842 [2024-07-26 11:17:05.208626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.842 [2024-07-26 11:17:05.209356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.842 [2024-07-26 11:17:05.209400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.842 [2024-07-26 11:17:05.209422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.842 [2024-07-26 11:17:05.209798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.842 [2024-07-26 11:17:05.209962] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.842 [2024-07-26 11:17:05.209972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.842 [2024-07-26 11:17:05.209978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.842 [2024-07-26 11:17:05.212665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.842 [2024-07-26 11:17:05.221542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.842 [2024-07-26 11:17:05.222266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.842 [2024-07-26 11:17:05.222308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.842 [2024-07-26 11:17:05.222330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.842 [2024-07-26 11:17:05.222679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.842 [2024-07-26 11:17:05.222843] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.842 [2024-07-26 11:17:05.222852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.842 [2024-07-26 11:17:05.222858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.842 [2024-07-26 11:17:05.225546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.842 [2024-07-26 11:17:05.234418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.842 [2024-07-26 11:17:05.234865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.842 [2024-07-26 11:17:05.234907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.842 [2024-07-26 11:17:05.234936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.842 [2024-07-26 11:17:05.235450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.842 [2024-07-26 11:17:05.235624] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.842 [2024-07-26 11:17:05.235634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.842 [2024-07-26 11:17:05.235641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.842 [2024-07-26 11:17:05.238283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.842 [2024-07-26 11:17:05.247253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.842 [2024-07-26 11:17:05.247981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.842 [2024-07-26 11:17:05.248023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.842 [2024-07-26 11:17:05.248059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.842 [2024-07-26 11:17:05.248641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.842 [2024-07-26 11:17:05.249001] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.842 [2024-07-26 11:17:05.249011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.842 [2024-07-26 11:17:05.249017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.842 [2024-07-26 11:17:05.251634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.842 [2024-07-26 11:17:05.260120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.842 [2024-07-26 11:17:05.260849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.842 [2024-07-26 11:17:05.260897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.842 [2024-07-26 11:17:05.260920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.842 [2024-07-26 11:17:05.261516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.842 [2024-07-26 11:17:05.261881] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.842 [2024-07-26 11:17:05.261891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.842 [2024-07-26 11:17:05.261897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.842 [2024-07-26 11:17:05.264523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.842 [2024-07-26 11:17:05.273046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.842 [2024-07-26 11:17:05.273774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.842 [2024-07-26 11:17:05.273820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.842 [2024-07-26 11:17:05.273842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.842 [2024-07-26 11:17:05.274057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.842 [2024-07-26 11:17:05.274247] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.842 [2024-07-26 11:17:05.274260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.842 [2024-07-26 11:17:05.274266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.842 [2024-07-26 11:17:05.276925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.842 [2024-07-26 11:17:05.285898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.842 [2024-07-26 11:17:05.286614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.842 [2024-07-26 11:17:05.286658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.842 [2024-07-26 11:17:05.286680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.842 [2024-07-26 11:17:05.286966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.842 [2024-07-26 11:17:05.287155] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.842 [2024-07-26 11:17:05.287166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.842 [2024-07-26 11:17:05.287172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.842 [2024-07-26 11:17:05.289838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.843 [2024-07-26 11:17:05.298745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.843 [2024-07-26 11:17:05.299473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.843 [2024-07-26 11:17:05.299517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.843 [2024-07-26 11:17:05.299539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.843 [2024-07-26 11:17:05.299870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.843 [2024-07-26 11:17:05.300034] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.843 [2024-07-26 11:17:05.300049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.843 [2024-07-26 11:17:05.300055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.843 [2024-07-26 11:17:05.302737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:45.843 [2024-07-26 11:17:05.311548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.843 [2024-07-26 11:17:05.312289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.843 [2024-07-26 11:17:05.312332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.843 [2024-07-26 11:17:05.312355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.843 [2024-07-26 11:17:05.312934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.843 [2024-07-26 11:17:05.313203] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.843 [2024-07-26 11:17:05.313213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.843 [2024-07-26 11:17:05.313219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.843 [2024-07-26 11:17:05.315882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:45.843 [2024-07-26 11:17:05.324388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:45.843 [2024-07-26 11:17:05.325057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.843 [2024-07-26 11:17:05.325099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:45.843 [2024-07-26 11:17:05.325121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:45.843 [2024-07-26 11:17:05.325700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:45.843 [2024-07-26 11:17:05.326041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:45.843 [2024-07-26 11:17:05.326055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:45.843 [2024-07-26 11:17:05.326061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:45.843 [2024-07-26 11:17:05.328688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.103 [2024-07-26 11:17:05.337477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.103 [2024-07-26 11:17:05.338108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.103 [2024-07-26 11:17:05.338151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.103 [2024-07-26 11:17:05.338174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.103 [2024-07-26 11:17:05.338753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.103 [2024-07-26 11:17:05.339108] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.103 [2024-07-26 11:17:05.339118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.103 [2024-07-26 11:17:05.339125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.103 [2024-07-26 11:17:05.341818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.103 [2024-07-26 11:17:05.350435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.103 [2024-07-26 11:17:05.351063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.103 [2024-07-26 11:17:05.351100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.103 [2024-07-26 11:17:05.351123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.103 [2024-07-26 11:17:05.351686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.103 [2024-07-26 11:17:05.351850] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.103 [2024-07-26 11:17:05.351860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.103 [2024-07-26 11:17:05.351866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.103 [2024-07-26 11:17:05.354553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.103 [2024-07-26 11:17:05.363314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.103 [2024-07-26 11:17:05.363997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.103 [2024-07-26 11:17:05.364039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.104 [2024-07-26 11:17:05.364075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.104 [2024-07-26 11:17:05.364662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.104 [2024-07-26 11:17:05.365157] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.104 [2024-07-26 11:17:05.365167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.104 [2024-07-26 11:17:05.365173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.104 [2024-07-26 11:17:05.367800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.104 [2024-07-26 11:17:05.376225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.104 [2024-07-26 11:17:05.376875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.104 [2024-07-26 11:17:05.376920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.104 [2024-07-26 11:17:05.376942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.104 [2024-07-26 11:17:05.377274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.104 [2024-07-26 11:17:05.377440] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.104 [2024-07-26 11:17:05.377449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.104 [2024-07-26 11:17:05.377455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.104 [2024-07-26 11:17:05.380145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.104 [2024-07-26 11:17:05.389218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.104 [2024-07-26 11:17:05.390072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.104 [2024-07-26 11:17:05.390116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.104 [2024-07-26 11:17:05.390142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.104 [2024-07-26 11:17:05.390341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.104 [2024-07-26 11:17:05.390515] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.104 [2024-07-26 11:17:05.390524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.104 [2024-07-26 11:17:05.390530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.104 [2024-07-26 11:17:05.393396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.104 [2024-07-26 11:17:05.402328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.104 [2024-07-26 11:17:05.403039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.104 [2024-07-26 11:17:05.403063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.104 [2024-07-26 11:17:05.403070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.104 [2024-07-26 11:17:05.403248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.104 [2024-07-26 11:17:05.403426] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.104 [2024-07-26 11:17:05.403436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.104 [2024-07-26 11:17:05.403447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.104 [2024-07-26 11:17:05.406280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.104 [2024-07-26 11:17:05.415476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.104 [2024-07-26 11:17:05.416075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.104 [2024-07-26 11:17:05.416094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.104 [2024-07-26 11:17:05.416101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.104 [2024-07-26 11:17:05.416279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.104 [2024-07-26 11:17:05.416457] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.104 [2024-07-26 11:17:05.416467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.104 [2024-07-26 11:17:05.416474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.104 [2024-07-26 11:17:05.419310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.104 [2024-07-26 11:17:05.428661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.104 [2024-07-26 11:17:05.429101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.104 [2024-07-26 11:17:05.429118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.104 [2024-07-26 11:17:05.429126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.104 [2024-07-26 11:17:05.429303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.104 [2024-07-26 11:17:05.429483] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.104 [2024-07-26 11:17:05.429493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.104 [2024-07-26 11:17:05.429499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.104 [2024-07-26 11:17:05.432331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.104 [2024-07-26 11:17:05.441863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.104 [2024-07-26 11:17:05.442604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.104 [2024-07-26 11:17:05.442621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.104 [2024-07-26 11:17:05.442628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.104 [2024-07-26 11:17:05.442805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.104 [2024-07-26 11:17:05.442984] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.104 [2024-07-26 11:17:05.442994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.104 [2024-07-26 11:17:05.443001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.104 [2024-07-26 11:17:05.445832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.104 [2024-07-26 11:17:05.455021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.104 [2024-07-26 11:17:05.455756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.104 [2024-07-26 11:17:05.455775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.104 [2024-07-26 11:17:05.455783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.104 [2024-07-26 11:17:05.455960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.104 [2024-07-26 11:17:05.456143] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.104 [2024-07-26 11:17:05.456154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.104 [2024-07-26 11:17:05.456161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.104 [2024-07-26 11:17:05.458992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.104 [2024-07-26 11:17:05.468193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.104 [2024-07-26 11:17:05.468910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.104 [2024-07-26 11:17:05.468928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.104 [2024-07-26 11:17:05.468936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.104 [2024-07-26 11:17:05.469118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.104 [2024-07-26 11:17:05.469297] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.104 [2024-07-26 11:17:05.469306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.104 [2024-07-26 11:17:05.469313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.104 [2024-07-26 11:17:05.472142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.104 [2024-07-26 11:17:05.481331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.104 [2024-07-26 11:17:05.482070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.104 [2024-07-26 11:17:05.482088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.104 [2024-07-26 11:17:05.482095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.104 [2024-07-26 11:17:05.482273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.104 [2024-07-26 11:17:05.482457] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.104 [2024-07-26 11:17:05.482467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.104 [2024-07-26 11:17:05.482474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.104 [2024-07-26 11:17:05.485303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.104 [2024-07-26 11:17:05.494490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.104 [2024-07-26 11:17:05.495203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.104 [2024-07-26 11:17:05.495220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.104 [2024-07-26 11:17:05.495228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.105 [2024-07-26 11:17:05.495405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.105 [2024-07-26 11:17:05.495586] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.105 [2024-07-26 11:17:05.495596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.105 [2024-07-26 11:17:05.495603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.105 [2024-07-26 11:17:05.498438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.105 [2024-07-26 11:17:05.507626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.105 [2024-07-26 11:17:05.508256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.105 [2024-07-26 11:17:05.508273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.105 [2024-07-26 11:17:05.508281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.105 [2024-07-26 11:17:05.508458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.105 [2024-07-26 11:17:05.508636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.105 [2024-07-26 11:17:05.508646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.105 [2024-07-26 11:17:05.508653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.105 [2024-07-26 11:17:05.511483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.105 [2024-07-26 11:17:05.520668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.105 [2024-07-26 11:17:05.521362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.105 [2024-07-26 11:17:05.521379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.105 [2024-07-26 11:17:05.521387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.105 [2024-07-26 11:17:05.521564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.105 [2024-07-26 11:17:05.521741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.105 [2024-07-26 11:17:05.521751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.105 [2024-07-26 11:17:05.521758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.105 [2024-07-26 11:17:05.524589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.105 [2024-07-26 11:17:05.533772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.105 [2024-07-26 11:17:05.534432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.105 [2024-07-26 11:17:05.534449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.105 [2024-07-26 11:17:05.534456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.105 [2024-07-26 11:17:05.534633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.105 [2024-07-26 11:17:05.534810] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.105 [2024-07-26 11:17:05.534820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.105 [2024-07-26 11:17:05.534827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.105 [2024-07-26 11:17:05.537664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.105 [2024-07-26 11:17:05.546854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.105 [2024-07-26 11:17:05.547580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.105 [2024-07-26 11:17:05.547597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.105 [2024-07-26 11:17:05.547605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.105 [2024-07-26 11:17:05.547782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.105 [2024-07-26 11:17:05.547959] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.105 [2024-07-26 11:17:05.547969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.105 [2024-07-26 11:17:05.547976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.105 [2024-07-26 11:17:05.550807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.105 [2024-07-26 11:17:05.559993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.105 [2024-07-26 11:17:05.560705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.105 [2024-07-26 11:17:05.560722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.105 [2024-07-26 11:17:05.560730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.105 [2024-07-26 11:17:05.560906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.105 [2024-07-26 11:17:05.561090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.105 [2024-07-26 11:17:05.561100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.105 [2024-07-26 11:17:05.561107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.105 [2024-07-26 11:17:05.563930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.105 [2024-07-26 11:17:05.573132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.105 [2024-07-26 11:17:05.573840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.105 [2024-07-26 11:17:05.573857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.105 [2024-07-26 11:17:05.573865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.105 [2024-07-26 11:17:05.574041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.105 [2024-07-26 11:17:05.574225] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.105 [2024-07-26 11:17:05.574235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.105 [2024-07-26 11:17:05.574241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.105 [2024-07-26 11:17:05.577079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.105 [2024-07-26 11:17:05.586274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.105 [2024-07-26 11:17:05.587005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.105 [2024-07-26 11:17:05.587022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.105 [2024-07-26 11:17:05.587032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.105 [2024-07-26 11:17:05.587216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.105 [2024-07-26 11:17:05.587395] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.105 [2024-07-26 11:17:05.587405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.105 [2024-07-26 11:17:05.587411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.105 [2024-07-26 11:17:05.590238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.366 [2024-07-26 11:17:05.599431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.366 [2024-07-26 11:17:05.600164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-26 11:17:05.600182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.366 [2024-07-26 11:17:05.600190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.366 [2024-07-26 11:17:05.600366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.366 [2024-07-26 11:17:05.600546] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.366 [2024-07-26 11:17:05.600556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.366 [2024-07-26 11:17:05.600562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.366 [2024-07-26 11:17:05.603423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.366 [2024-07-26 11:17:05.612612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.366 [2024-07-26 11:17:05.613346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-26 11:17:05.613363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.366 [2024-07-26 11:17:05.613371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.366 [2024-07-26 11:17:05.613548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.366 [2024-07-26 11:17:05.613727] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.366 [2024-07-26 11:17:05.613737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.366 [2024-07-26 11:17:05.613744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.366 [2024-07-26 11:17:05.616574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.366 [2024-07-26 11:17:05.625809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.366 [2024-07-26 11:17:05.626551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-26 11:17:05.626567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.366 [2024-07-26 11:17:05.626575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.366 [2024-07-26 11:17:05.626752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.366 [2024-07-26 11:17:05.626930] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.366 [2024-07-26 11:17:05.626943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.366 [2024-07-26 11:17:05.626951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.366 [2024-07-26 11:17:05.629782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.366 [2024-07-26 11:17:05.638972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.366 [2024-07-26 11:17:05.639637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-26 11:17:05.639654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.366 [2024-07-26 11:17:05.639662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.366 [2024-07-26 11:17:05.639839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.366 [2024-07-26 11:17:05.640017] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.366 [2024-07-26 11:17:05.640027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.366 [2024-07-26 11:17:05.640033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.366 [2024-07-26 11:17:05.642869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.366 [2024-07-26 11:17:05.652079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.366 [2024-07-26 11:17:05.652813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-26 11:17:05.652831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.366 [2024-07-26 11:17:05.652839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.366 [2024-07-26 11:17:05.653017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.366 [2024-07-26 11:17:05.653201] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.366 [2024-07-26 11:17:05.653211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.366 [2024-07-26 11:17:05.653218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.366 [2024-07-26 11:17:05.656047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.366 [2024-07-26 11:17:05.665263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.366 [2024-07-26 11:17:05.665993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-26 11:17:05.666010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.366 [2024-07-26 11:17:05.666017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.366 [2024-07-26 11:17:05.666203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.366 [2024-07-26 11:17:05.666382] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.366 [2024-07-26 11:17:05.666392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.366 [2024-07-26 11:17:05.666398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.366 [2024-07-26 11:17:05.669237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.366 [2024-07-26 11:17:05.678584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.366 [2024-07-26 11:17:05.679306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-26 11:17:05.679323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.366 [2024-07-26 11:17:05.679330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.366 [2024-07-26 11:17:05.679508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.366 [2024-07-26 11:17:05.679687] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.366 [2024-07-26 11:17:05.679697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.366 [2024-07-26 11:17:05.679704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.366 [2024-07-26 11:17:05.682535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.366 [2024-07-26 11:17:05.691725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.366 [2024-07-26 11:17:05.692438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-26 11:17:05.692456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.366 [2024-07-26 11:17:05.692463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.366 [2024-07-26 11:17:05.692640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.366 [2024-07-26 11:17:05.692819] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.366 [2024-07-26 11:17:05.692828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.366 [2024-07-26 11:17:05.692835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.366 [2024-07-26 11:17:05.695668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.366 [2024-07-26 11:17:05.704854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.366 [2024-07-26 11:17:05.705580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-26 11:17:05.705598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.367 [2024-07-26 11:17:05.705606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.367 [2024-07-26 11:17:05.705782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.367 [2024-07-26 11:17:05.705959] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.367 [2024-07-26 11:17:05.705969] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.367 [2024-07-26 11:17:05.705976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.367 [2024-07-26 11:17:05.708808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.367 [2024-07-26 11:17:05.717995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.367 [2024-07-26 11:17:05.718729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-26 11:17:05.718747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.367 [2024-07-26 11:17:05.718754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.367 [2024-07-26 11:17:05.718935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.367 [2024-07-26 11:17:05.719120] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.367 [2024-07-26 11:17:05.719130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.367 [2024-07-26 11:17:05.719137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.367 [2024-07-26 11:17:05.721964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.367 [2024-07-26 11:17:05.731146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.367 [2024-07-26 11:17:05.731864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-26 11:17:05.731881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.367 [2024-07-26 11:17:05.731889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.367 [2024-07-26 11:17:05.732073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.367 [2024-07-26 11:17:05.732252] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.367 [2024-07-26 11:17:05.732262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.367 [2024-07-26 11:17:05.732269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.367 [2024-07-26 11:17:05.735099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.367 [2024-07-26 11:17:05.744280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.367 [2024-07-26 11:17:05.745003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-26 11:17:05.745057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.367 [2024-07-26 11:17:05.745080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.367 [2024-07-26 11:17:05.745635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.367 [2024-07-26 11:17:05.745813] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.367 [2024-07-26 11:17:05.745823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.367 [2024-07-26 11:17:05.745831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.367 [2024-07-26 11:17:05.748658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.367 [2024-07-26 11:17:05.757231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.367 [2024-07-26 11:17:05.757906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-26 11:17:05.757948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.367 [2024-07-26 11:17:05.757971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.367 [2024-07-26 11:17:05.758566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.367 [2024-07-26 11:17:05.759064] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.367 [2024-07-26 11:17:05.759074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.367 [2024-07-26 11:17:05.759084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.367 [2024-07-26 11:17:05.761817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.367 [2024-07-26 11:17:05.770027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.367 [2024-07-26 11:17:05.770760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-26 11:17:05.770803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.367 [2024-07-26 11:17:05.770826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.367 [2024-07-26 11:17:05.771109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.367 [2024-07-26 11:17:05.771283] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.367 [2024-07-26 11:17:05.771293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.367 [2024-07-26 11:17:05.771299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.367 [2024-07-26 11:17:05.773955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.367 [2024-07-26 11:17:05.782923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.367 [2024-07-26 11:17:05.783855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-26 11:17:05.783891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.367 [2024-07-26 11:17:05.783900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.367 [2024-07-26 11:17:05.784085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.367 [2024-07-26 11:17:05.784265] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.367 [2024-07-26 11:17:05.784277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.367 [2024-07-26 11:17:05.784284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.367 [2024-07-26 11:17:05.787029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.367 [2024-07-26 11:17:05.795947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.367 [2024-07-26 11:17:05.796642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-26 11:17:05.796687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.367 [2024-07-26 11:17:05.796710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.367 [2024-07-26 11:17:05.797066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.367 [2024-07-26 11:17:05.797241] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.367 [2024-07-26 11:17:05.797250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.367 [2024-07-26 11:17:05.797257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.367 [2024-07-26 11:17:05.800001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.367 [2024-07-26 11:17:05.808844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.367 [2024-07-26 11:17:05.809551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-26 11:17:05.809601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.367 [2024-07-26 11:17:05.809624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.367 [2024-07-26 11:17:05.810222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.367 [2024-07-26 11:17:05.810387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.367 [2024-07-26 11:17:05.810396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.367 [2024-07-26 11:17:05.810403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.367 [2024-07-26 11:17:05.813068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.367 [2024-07-26 11:17:05.821876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.367 [2024-07-26 11:17:05.822540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-26 11:17:05.822584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.367 [2024-07-26 11:17:05.822607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.367 [2024-07-26 11:17:05.822975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.367 [2024-07-26 11:17:05.823157] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.367 [2024-07-26 11:17:05.823166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.367 [2024-07-26 11:17:05.823172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.367 [2024-07-26 11:17:05.825855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.367 [2024-07-26 11:17:05.834827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.367 [2024-07-26 11:17:05.835551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-26 11:17:05.835595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.367 [2024-07-26 11:17:05.835616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.367 [2024-07-26 11:17:05.835989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.367 [2024-07-26 11:17:05.836181] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.367 [2024-07-26 11:17:05.836191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.367 [2024-07-26 11:17:05.836197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.367 [2024-07-26 11:17:05.838859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.367 [2024-07-26 11:17:05.847774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.367 [2024-07-26 11:17:05.848432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-26 11:17:05.848476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.367 [2024-07-26 11:17:05.848497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.367 [2024-07-26 11:17:05.849004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.367 [2024-07-26 11:17:05.849200] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.367 [2024-07-26 11:17:05.849211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.367 [2024-07-26 11:17:05.849217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.367 [2024-07-26 11:17:05.851878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.629 [2024-07-26 11:17:05.860840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.629 [2024-07-26 11:17:05.861618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.629 [2024-07-26 11:17:05.861662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.629 [2024-07-26 11:17:05.861683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.629 [2024-07-26 11:17:05.862054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.629 [2024-07-26 11:17:05.862228] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.629 [2024-07-26 11:17:05.862238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.629 [2024-07-26 11:17:05.862244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.629 [2024-07-26 11:17:05.864940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.629 [2024-07-26 11:17:05.873884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.629 [2024-07-26 11:17:05.874635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.629 [2024-07-26 11:17:05.874678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.629 [2024-07-26 11:17:05.874699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.629 [2024-07-26 11:17:05.875172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.629 [2024-07-26 11:17:05.875347] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.629 [2024-07-26 11:17:05.875357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.629 [2024-07-26 11:17:05.875363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.629 [2024-07-26 11:17:05.878019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.629 [2024-07-26 11:17:05.886679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.629 [2024-07-26 11:17:05.887405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.629 [2024-07-26 11:17:05.887449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.629 [2024-07-26 11:17:05.887470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.629 [2024-07-26 11:17:05.888064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.629 [2024-07-26 11:17:05.888480] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.629 [2024-07-26 11:17:05.888489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.629 [2024-07-26 11:17:05.888495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.629 [2024-07-26 11:17:05.891092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.629 [2024-07-26 11:17:05.899812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.629 [2024-07-26 11:17:05.900566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.629 [2024-07-26 11:17:05.900612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.629 [2024-07-26 11:17:05.900633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.629 [2024-07-26 11:17:05.901184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.629 [2024-07-26 11:17:05.901358] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.629 [2024-07-26 11:17:05.901368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.629 [2024-07-26 11:17:05.901375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.629 [2024-07-26 11:17:05.904028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.629 [2024-07-26 11:17:05.912639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.629 [2024-07-26 11:17:05.913368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.629 [2024-07-26 11:17:05.913413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.629 [2024-07-26 11:17:05.913434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.629 [2024-07-26 11:17:05.914014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.629 [2024-07-26 11:17:05.914485] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.629 [2024-07-26 11:17:05.914495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.629 [2024-07-26 11:17:05.914501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.629 [2024-07-26 11:17:05.917234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.629 [2024-07-26 11:17:05.925606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.629 [2024-07-26 11:17:05.926340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.629 [2024-07-26 11:17:05.926384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.629 [2024-07-26 11:17:05.926405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.629 [2024-07-26 11:17:05.926984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.629 [2024-07-26 11:17:05.927299] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.629 [2024-07-26 11:17:05.927309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.629 [2024-07-26 11:17:05.927315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.629 [2024-07-26 11:17:05.929969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.629 [2024-07-26 11:17:05.938412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.629 [2024-07-26 11:17:05.939142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.629 [2024-07-26 11:17:05.939186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.629 [2024-07-26 11:17:05.939215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.629 [2024-07-26 11:17:05.939795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.629 [2024-07-26 11:17:05.940391] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.629 [2024-07-26 11:17:05.940402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.629 [2024-07-26 11:17:05.940408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.629 [2024-07-26 11:17:05.943060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.629 [2024-07-26 11:17:05.951349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.629 [2024-07-26 11:17:05.952079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.629 [2024-07-26 11:17:05.952122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.629 [2024-07-26 11:17:05.952143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.629 [2024-07-26 11:17:05.952722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.629 [2024-07-26 11:17:05.953051] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.629 [2024-07-26 11:17:05.953065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.629 [2024-07-26 11:17:05.953074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.629 [2024-07-26 11:17:05.957121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.629 [2024-07-26 11:17:05.964831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.629 [2024-07-26 11:17:05.965540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.629 [2024-07-26 11:17:05.965582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.629 [2024-07-26 11:17:05.965604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.629 [2024-07-26 11:17:05.966198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.629 [2024-07-26 11:17:05.966576] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.629 [2024-07-26 11:17:05.966587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.629 [2024-07-26 11:17:05.966594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.629 [2024-07-26 11:17:05.969303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.629 [2024-07-26 11:17:05.977740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.629 [2024-07-26 11:17:05.978438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.629 [2024-07-26 11:17:05.978480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.630 [2024-07-26 11:17:05.978501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.630 [2024-07-26 11:17:05.979093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.630 [2024-07-26 11:17:05.979554] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.630 [2024-07-26 11:17:05.979566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.630 [2024-07-26 11:17:05.979573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.630 [2024-07-26 11:17:05.982214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.630 [2024-07-26 11:17:05.990631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.630 [2024-07-26 11:17:05.991356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.630 [2024-07-26 11:17:05.991400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.630 [2024-07-26 11:17:05.991421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.630 [2024-07-26 11:17:05.991655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.630 [2024-07-26 11:17:05.991818] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.630 [2024-07-26 11:17:05.991828] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.630 [2024-07-26 11:17:05.991834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.630 [2024-07-26 11:17:05.994526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.630 [2024-07-26 11:17:06.003549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.630 [2024-07-26 11:17:06.004252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.630 [2024-07-26 11:17:06.004294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.630 [2024-07-26 11:17:06.004315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.630 [2024-07-26 11:17:06.004673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.630 [2024-07-26 11:17:06.004838] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.630 [2024-07-26 11:17:06.004847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.630 [2024-07-26 11:17:06.004853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.630 [2024-07-26 11:17:06.007544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.630 [2024-07-26 11:17:06.016353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.630 [2024-07-26 11:17:06.017083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.630 [2024-07-26 11:17:06.017126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.630 [2024-07-26 11:17:06.017147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.630 [2024-07-26 11:17:06.017412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.630 [2024-07-26 11:17:06.017576] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.630 [2024-07-26 11:17:06.017585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.630 [2024-07-26 11:17:06.017591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.630 [2024-07-26 11:17:06.020278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.630 [2024-07-26 11:17:06.029239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.630 [2024-07-26 11:17:06.029969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.630 [2024-07-26 11:17:06.030011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.630 [2024-07-26 11:17:06.030031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.630 [2024-07-26 11:17:06.030627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.630 [2024-07-26 11:17:06.030905] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.630 [2024-07-26 11:17:06.030915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.630 [2024-07-26 11:17:06.030922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.630 [2024-07-26 11:17:06.033548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.630 [2024-07-26 11:17:06.042058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.630 [2024-07-26 11:17:06.042710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.630 [2024-07-26 11:17:06.042753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.630 [2024-07-26 11:17:06.042775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.630 [2024-07-26 11:17:06.043367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.630 [2024-07-26 11:17:06.043789] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.630 [2024-07-26 11:17:06.043799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.630 [2024-07-26 11:17:06.043805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.630 [2024-07-26 11:17:06.047634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.630 [2024-07-26 11:17:06.055780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.630 [2024-07-26 11:17:06.056517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.630 [2024-07-26 11:17:06.056561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.630 [2024-07-26 11:17:06.056583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.630 [2024-07-26 11:17:06.056855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.630 [2024-07-26 11:17:06.057023] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.630 [2024-07-26 11:17:06.057033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.630 [2024-07-26 11:17:06.057039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.630 [2024-07-26 11:17:06.059771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.630 [2024-07-26 11:17:06.068691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.630 [2024-07-26 11:17:06.069387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.630 [2024-07-26 11:17:06.069403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.630 [2024-07-26 11:17:06.069409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.630 [2024-07-26 11:17:06.069576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.630 [2024-07-26 11:17:06.069738] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.630 [2024-07-26 11:17:06.069755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.630 [2024-07-26 11:17:06.069761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.630 [2024-07-26 11:17:06.072457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.630 [2024-07-26 11:17:06.081479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.630 [2024-07-26 11:17:06.082209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.630 [2024-07-26 11:17:06.082253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.630 [2024-07-26 11:17:06.082274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.630 [2024-07-26 11:17:06.082577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.630 [2024-07-26 11:17:06.082742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.630 [2024-07-26 11:17:06.082752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.630 [2024-07-26 11:17:06.082758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.630 [2024-07-26 11:17:06.085457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.630 [2024-07-26 11:17:06.094373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.630 [2024-07-26 11:17:06.095139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.630 [2024-07-26 11:17:06.095182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.630 [2024-07-26 11:17:06.095203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.630 [2024-07-26 11:17:06.095770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.630 [2024-07-26 11:17:06.095933] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.630 [2024-07-26 11:17:06.095943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.630 [2024-07-26 11:17:06.095949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.630 [2024-07-26 11:17:06.098641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.630 [2024-07-26 11:17:06.107192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.630 [2024-07-26 11:17:06.107898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.630 [2024-07-26 11:17:06.107941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.631 [2024-07-26 11:17:06.107962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.631 [2024-07-26 11:17:06.108555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.631 [2024-07-26 11:17:06.108957] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.631 [2024-07-26 11:17:06.108967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.631 [2024-07-26 11:17:06.108977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.631 [2024-07-26 11:17:06.111650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.631 [2024-07-26 11:17:06.120220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.631 [2024-07-26 11:17:06.120936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.631 [2024-07-26 11:17:06.120953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.631 [2024-07-26 11:17:06.120960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.631 [2024-07-26 11:17:06.121139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.631 [2024-07-26 11:17:06.121313] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.631 [2024-07-26 11:17:06.121323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.631 [2024-07-26 11:17:06.121329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.892 [2024-07-26 11:17:06.124108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.892 [2024-07-26 11:17:06.133053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.892 [2024-07-26 11:17:06.133805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.892 [2024-07-26 11:17:06.133849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.892 [2024-07-26 11:17:06.133871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.892 [2024-07-26 11:17:06.134401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.892 [2024-07-26 11:17:06.134656] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.892 [2024-07-26 11:17:06.134670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.892 [2024-07-26 11:17:06.134680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.892 [2024-07-26 11:17:06.138740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.892 [2024-07-26 11:17:06.146546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.892 [2024-07-26 11:17:06.147275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.892 [2024-07-26 11:17:06.147318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.892 [2024-07-26 11:17:06.147341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.892 [2024-07-26 11:17:06.147867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.892 [2024-07-26 11:17:06.148049] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.892 [2024-07-26 11:17:06.148059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.892 [2024-07-26 11:17:06.148066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.892 [2024-07-26 11:17:06.150958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.892 [2024-07-26 11:17:06.159610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.892 [2024-07-26 11:17:06.160265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.892 [2024-07-26 11:17:06.160287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.892 [2024-07-26 11:17:06.160294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.892 [2024-07-26 11:17:06.160471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.892 [2024-07-26 11:17:06.160634] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.892 [2024-07-26 11:17:06.160643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.892 [2024-07-26 11:17:06.160649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.892 [2024-07-26 11:17:06.163400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.893 [2024-07-26 11:17:06.172647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.893 [2024-07-26 11:17:06.173376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.893 [2024-07-26 11:17:06.173420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.893 [2024-07-26 11:17:06.173441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.893 [2024-07-26 11:17:06.174022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.893 [2024-07-26 11:17:06.174292] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.893 [2024-07-26 11:17:06.174302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.893 [2024-07-26 11:17:06.174309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.893 [2024-07-26 11:17:06.177021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.893 [2024-07-26 11:17:06.185562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.893 [2024-07-26 11:17:06.186260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.893 [2024-07-26 11:17:06.186304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.893 [2024-07-26 11:17:06.186325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.893 [2024-07-26 11:17:06.186904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.893 [2024-07-26 11:17:06.187182] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.893 [2024-07-26 11:17:06.187192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.893 [2024-07-26 11:17:06.187198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.893 [2024-07-26 11:17:06.189880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.893 [2024-07-26 11:17:06.198387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.893 [2024-07-26 11:17:06.199018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.893 [2024-07-26 11:17:06.199073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.893 [2024-07-26 11:17:06.199096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.893 [2024-07-26 11:17:06.199560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.893 [2024-07-26 11:17:06.199727] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.893 [2024-07-26 11:17:06.199738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.893 [2024-07-26 11:17:06.199744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.893 [2024-07-26 11:17:06.202486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.893 [2024-07-26 11:17:06.211290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.893 [2024-07-26 11:17:06.211978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.893 [2024-07-26 11:17:06.212021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.893 [2024-07-26 11:17:06.212057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.893 [2024-07-26 11:17:06.212372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.893 [2024-07-26 11:17:06.212547] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.893 [2024-07-26 11:17:06.212556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.893 [2024-07-26 11:17:06.212563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.893 [2024-07-26 11:17:06.215210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.893 [2024-07-26 11:17:06.224081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.893 [2024-07-26 11:17:06.224807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.893 [2024-07-26 11:17:06.224850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.893 [2024-07-26 11:17:06.224871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.893 [2024-07-26 11:17:06.225462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.893 [2024-07-26 11:17:06.226027] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.893 [2024-07-26 11:17:06.226037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.893 [2024-07-26 11:17:06.226048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.893 [2024-07-26 11:17:06.228679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.893 [2024-07-26 11:17:06.236875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.893 [2024-07-26 11:17:06.237604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.893 [2024-07-26 11:17:06.237648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.893 [2024-07-26 11:17:06.237669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.893 [2024-07-26 11:17:06.238213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.893 [2024-07-26 11:17:06.238387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.893 [2024-07-26 11:17:06.238397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.893 [2024-07-26 11:17:06.238404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.893 [2024-07-26 11:17:06.241095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.893 [2024-07-26 11:17:06.249821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.893 [2024-07-26 11:17:06.250557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.893 [2024-07-26 11:17:06.250573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.893 [2024-07-26 11:17:06.250580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.893 [2024-07-26 11:17:06.250743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.893 [2024-07-26 11:17:06.250906] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.893 [2024-07-26 11:17:06.250914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.893 [2024-07-26 11:17:06.250920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.893 [2024-07-26 11:17:06.253792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.893 [2024-07-26 11:17:06.262670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.893 [2024-07-26 11:17:06.263413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.893 [2024-07-26 11:17:06.263458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.893 [2024-07-26 11:17:06.263480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.893 [2024-07-26 11:17:06.264009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.893 [2024-07-26 11:17:06.264201] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.893 [2024-07-26 11:17:06.264211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.893 [2024-07-26 11:17:06.264218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.893 [2024-07-26 11:17:06.266881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.893 [2024-07-26 11:17:06.275559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.893 [2024-07-26 11:17:06.276259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.893 [2024-07-26 11:17:06.276304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.893 [2024-07-26 11:17:06.276325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.893 [2024-07-26 11:17:06.276905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.893 [2024-07-26 11:17:06.277088] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.893 [2024-07-26 11:17:06.277113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.893 [2024-07-26 11:17:06.277120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.893 [2024-07-26 11:17:06.279786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.893 [2024-07-26 11:17:06.288459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.893 [2024-07-26 11:17:06.289191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.893 [2024-07-26 11:17:06.289235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.893 [2024-07-26 11:17:06.289273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.893 [2024-07-26 11:17:06.289592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.893 [2024-07-26 11:17:06.289757] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.893 [2024-07-26 11:17:06.289766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.893 [2024-07-26 11:17:06.289772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.893 [2024-07-26 11:17:06.292458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.893 [2024-07-26 11:17:06.301322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.894 [2024-07-26 11:17:06.302061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.894 [2024-07-26 11:17:06.302106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.894 [2024-07-26 11:17:06.302127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.894 [2024-07-26 11:17:06.302523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.894 [2024-07-26 11:17:06.302688] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.894 [2024-07-26 11:17:06.302697] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.894 [2024-07-26 11:17:06.302703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.894 [2024-07-26 11:17:06.305443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.894 [2024-07-26 11:17:06.314224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.894 [2024-07-26 11:17:06.314945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.894 [2024-07-26 11:17:06.314988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.894 [2024-07-26 11:17:06.315009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.894 [2024-07-26 11:17:06.315604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.894 [2024-07-26 11:17:06.315878] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.894 [2024-07-26 11:17:06.315888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.894 [2024-07-26 11:17:06.315894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.894 [2024-07-26 11:17:06.319769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.894 [2024-07-26 11:17:06.327729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.894 [2024-07-26 11:17:06.328438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.894 [2024-07-26 11:17:06.328481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.894 [2024-07-26 11:17:06.328502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.894 [2024-07-26 11:17:06.329094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.894 [2024-07-26 11:17:06.329592] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.894 [2024-07-26 11:17:06.329604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.894 [2024-07-26 11:17:06.329611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.894 [2024-07-26 11:17:06.332305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.894 [2024-07-26 11:17:06.340554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.894 [2024-07-26 11:17:06.341288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.894 [2024-07-26 11:17:06.341331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.894 [2024-07-26 11:17:06.341352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.894 [2024-07-26 11:17:06.341577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.894 [2024-07-26 11:17:06.341741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.894 [2024-07-26 11:17:06.341750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.894 [2024-07-26 11:17:06.341757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.894 [2024-07-26 11:17:06.344446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.894 [2024-07-26 11:17:06.353356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.894 [2024-07-26 11:17:06.354076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.894 [2024-07-26 11:17:06.354120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.894 [2024-07-26 11:17:06.354141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.894 [2024-07-26 11:17:06.354496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.894 [2024-07-26 11:17:06.354659] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.894 [2024-07-26 11:17:06.354669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.894 [2024-07-26 11:17:06.354675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.894 [2024-07-26 11:17:06.357369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.894 [2024-07-26 11:17:06.366176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.894 [2024-07-26 11:17:06.366918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.894 [2024-07-26 11:17:06.366963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.894 [2024-07-26 11:17:06.366985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.894 [2024-07-26 11:17:06.367478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.894 [2024-07-26 11:17:06.367652] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.894 [2024-07-26 11:17:06.367662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.894 [2024-07-26 11:17:06.367669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.894 [2024-07-26 11:17:06.370316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:46.894 [2024-07-26 11:17:06.378986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.894 [2024-07-26 11:17:06.379694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.894 [2024-07-26 11:17:06.379737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:46.894 [2024-07-26 11:17:06.379758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:46.894 [2024-07-26 11:17:06.380351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:46.894 [2024-07-26 11:17:06.380748] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.894 [2024-07-26 11:17:06.380757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.894 [2024-07-26 11:17:06.380764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.894 [2024-07-26 11:17:06.383537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.155 [2024-07-26 11:17:06.391947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.155 [2024-07-26 11:17:06.392691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.155 [2024-07-26 11:17:06.392734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.155 [2024-07-26 11:17:06.392757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.155 [2024-07-26 11:17:06.393350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.155 [2024-07-26 11:17:06.393665] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.156 [2024-07-26 11:17:06.393675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.156 [2024-07-26 11:17:06.393681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.156 [2024-07-26 11:17:06.396315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.156 [2024-07-26 11:17:06.404912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.156 [2024-07-26 11:17:06.405682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.156 [2024-07-26 11:17:06.405725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.156 [2024-07-26 11:17:06.405746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.156 [2024-07-26 11:17:06.406339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.156 [2024-07-26 11:17:06.406814] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.156 [2024-07-26 11:17:06.406827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.156 [2024-07-26 11:17:06.406836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.156 [2024-07-26 11:17:06.410900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.156 [2024-07-26 11:17:06.418422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.156 [2024-07-26 11:17:06.419090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.156 [2024-07-26 11:17:06.419135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.156 [2024-07-26 11:17:06.419158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.156 [2024-07-26 11:17:06.419452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.156 [2024-07-26 11:17:06.419621] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.156 [2024-07-26 11:17:06.419630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.156 [2024-07-26 11:17:06.419637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.156 [2024-07-26 11:17:06.422326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.156 [2024-07-26 11:17:06.431223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.156 [2024-07-26 11:17:06.431949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.156 [2024-07-26 11:17:06.431992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.156 [2024-07-26 11:17:06.432013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.156 [2024-07-26 11:17:06.432490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.156 [2024-07-26 11:17:06.432670] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.156 [2024-07-26 11:17:06.432680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.156 [2024-07-26 11:17:06.432687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.156 [2024-07-26 11:17:06.435464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.156 [2024-07-26 11:17:06.444026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.156 [2024-07-26 11:17:06.444756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.156 [2024-07-26 11:17:06.444800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.156 [2024-07-26 11:17:06.444822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.156 [2024-07-26 11:17:06.445414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.156 [2024-07-26 11:17:06.445758] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.156 [2024-07-26 11:17:06.445768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.156 [2024-07-26 11:17:06.445774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.156 [2024-07-26 11:17:06.448405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.156 [2024-07-26 11:17:06.456924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.156 [2024-07-26 11:17:06.457649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.156 [2024-07-26 11:17:06.457692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.156 [2024-07-26 11:17:06.457713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.156 [2024-07-26 11:17:06.457986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.156 [2024-07-26 11:17:06.458176] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.156 [2024-07-26 11:17:06.458186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.156 [2024-07-26 11:17:06.458195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.156 [2024-07-26 11:17:06.460859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.156 [2024-07-26 11:17:06.469827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.156 [2024-07-26 11:17:06.470554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.156 [2024-07-26 11:17:06.470599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.156 [2024-07-26 11:17:06.470619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.156 [2024-07-26 11:17:06.471167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.156 [2024-07-26 11:17:06.471342] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.156 [2024-07-26 11:17:06.471352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.156 [2024-07-26 11:17:06.471358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.156 [2024-07-26 11:17:06.474014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.156 [2024-07-26 11:17:06.482674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.156 [2024-07-26 11:17:06.483398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.156 [2024-07-26 11:17:06.483441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.156 [2024-07-26 11:17:06.483463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.156 [2024-07-26 11:17:06.483873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.156 [2024-07-26 11:17:06.484038] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.156 [2024-07-26 11:17:06.484052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.156 [2024-07-26 11:17:06.484059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.156 [2024-07-26 11:17:06.486750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.156 [2024-07-26 11:17:06.495559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.156 [2024-07-26 11:17:06.496196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.156 [2024-07-26 11:17:06.496240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.156 [2024-07-26 11:17:06.496263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.156 [2024-07-26 11:17:06.496531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.156 [2024-07-26 11:17:06.496695] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.156 [2024-07-26 11:17:06.496704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.156 [2024-07-26 11:17:06.496710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.156 [2024-07-26 11:17:06.499398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.156 [2024-07-26 11:17:06.508415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.156 [2024-07-26 11:17:06.509143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.156 [2024-07-26 11:17:06.509185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.156 [2024-07-26 11:17:06.509206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.156 [2024-07-26 11:17:06.509784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.156 [2024-07-26 11:17:06.510008] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.156 [2024-07-26 11:17:06.510017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.156 [2024-07-26 11:17:06.510023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.156 [2024-07-26 11:17:06.512715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.156 [2024-07-26 11:17:06.521232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.156 [2024-07-26 11:17:06.521958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.156 [2024-07-26 11:17:06.522002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.156 [2024-07-26 11:17:06.522023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.157 [2024-07-26 11:17:06.522619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.157 [2024-07-26 11:17:06.522954] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.157 [2024-07-26 11:17:06.522963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.157 [2024-07-26 11:17:06.522970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.157 [2024-07-26 11:17:06.525594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.157 [2024-07-26 11:17:06.534103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.157 [2024-07-26 11:17:06.534757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.157 [2024-07-26 11:17:06.534798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.157 [2024-07-26 11:17:06.534820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.157 [2024-07-26 11:17:06.535415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.157 [2024-07-26 11:17:06.535945] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.157 [2024-07-26 11:17:06.535955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.157 [2024-07-26 11:17:06.535962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.157 [2024-07-26 11:17:06.538584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.157 [2024-07-26 11:17:06.546938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.157 [2024-07-26 11:17:06.547670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.157 [2024-07-26 11:17:06.547713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.157 [2024-07-26 11:17:06.547733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.157 [2024-07-26 11:17:06.548154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.157 [2024-07-26 11:17:06.548412] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.157 [2024-07-26 11:17:06.548425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.157 [2024-07-26 11:17:06.548435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.157 [2024-07-26 11:17:06.552484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.157 [2024-07-26 11:17:06.560339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.157 [2024-07-26 11:17:06.561055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.157 [2024-07-26 11:17:06.561100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.157 [2024-07-26 11:17:06.561121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.157 [2024-07-26 11:17:06.561699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.157 [2024-07-26 11:17:06.562291] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.157 [2024-07-26 11:17:06.562317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.157 [2024-07-26 11:17:06.562338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.157 [2024-07-26 11:17:06.565059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.157 [2024-07-26 11:17:06.573133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.157 [2024-07-26 11:17:06.573860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.157 [2024-07-26 11:17:06.573903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.157 [2024-07-26 11:17:06.573925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.157 [2024-07-26 11:17:06.574255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.157 [2024-07-26 11:17:06.574429] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.157 [2024-07-26 11:17:06.574439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.157 [2024-07-26 11:17:06.574446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.157 [2024-07-26 11:17:06.577264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.157 [2024-07-26 11:17:06.586137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.157 [2024-07-26 11:17:06.586865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.157 [2024-07-26 11:17:06.586908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.157 [2024-07-26 11:17:06.586929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.157 [2024-07-26 11:17:06.587523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.157 [2024-07-26 11:17:06.588057] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.157 [2024-07-26 11:17:06.588067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.157 [2024-07-26 11:17:06.588074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.157 [2024-07-26 11:17:06.590690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.157 [2024-07-26 11:17:06.599046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.157 [2024-07-26 11:17:06.599771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.157 [2024-07-26 11:17:06.599813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.157 [2024-07-26 11:17:06.599834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.157 [2024-07-26 11:17:06.600421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.157 [2024-07-26 11:17:06.600596] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.157 [2024-07-26 11:17:06.600606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.157 [2024-07-26 11:17:06.600613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.157 [2024-07-26 11:17:06.603252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.157 [2024-07-26 11:17:06.611906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.157 [2024-07-26 11:17:06.612636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.157 [2024-07-26 11:17:06.612679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.157 [2024-07-26 11:17:06.612700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.157 [2024-07-26 11:17:06.613292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.157 [2024-07-26 11:17:06.613590] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.157 [2024-07-26 11:17:06.613600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.157 [2024-07-26 11:17:06.613607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.157 [2024-07-26 11:17:06.616249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.157 [2024-07-26 11:17:06.624805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.157 [2024-07-26 11:17:06.625537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.157 [2024-07-26 11:17:06.625580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.157 [2024-07-26 11:17:06.625601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.157 [2024-07-26 11:17:06.625958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.157 [2024-07-26 11:17:06.626145] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.157 [2024-07-26 11:17:06.626155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.157 [2024-07-26 11:17:06.626162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.157 [2024-07-26 11:17:06.628823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.157 [2024-07-26 11:17:06.637691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.157 [2024-07-26 11:17:06.638411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.157 [2024-07-26 11:17:06.638454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.157 [2024-07-26 11:17:06.638482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.157 [2024-07-26 11:17:06.638906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.157 [2024-07-26 11:17:06.639075] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.157 [2024-07-26 11:17:06.639085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.157 [2024-07-26 11:17:06.639091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.157 [2024-07-26 11:17:06.641683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.418 [2024-07-26 11:17:06.650697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.418 [2024-07-26 11:17:06.651410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.418 [2024-07-26 11:17:06.651454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.418 [2024-07-26 11:17:06.651476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.418 [2024-07-26 11:17:06.652034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.418 [2024-07-26 11:17:06.652229] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.418 [2024-07-26 11:17:06.652239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.418 [2024-07-26 11:17:06.652245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.418 [2024-07-26 11:17:06.654899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.418 [2024-07-26 11:17:06.663624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.418 [2024-07-26 11:17:06.664414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.418 [2024-07-26 11:17:06.664456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.418 [2024-07-26 11:17:06.664477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.418 [2024-07-26 11:17:06.665070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.418 [2024-07-26 11:17:06.665614] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.418 [2024-07-26 11:17:06.665634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.418 [2024-07-26 11:17:06.665641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.418 [2024-07-26 11:17:06.668495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.418 [2024-07-26 11:17:06.676570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.418 [2024-07-26 11:17:06.677262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.418 [2024-07-26 11:17:06.677307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.418 [2024-07-26 11:17:06.677329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.418 [2024-07-26 11:17:06.677910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.418 [2024-07-26 11:17:06.678080] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.418 [2024-07-26 11:17:06.678093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.418 [2024-07-26 11:17:06.678099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.418 [2024-07-26 11:17:06.680740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.418 [2024-07-26 11:17:06.689546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.418 [2024-07-26 11:17:06.690247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.418 [2024-07-26 11:17:06.690291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.418 [2024-07-26 11:17:06.690312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.418 [2024-07-26 11:17:06.690891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.418 [2024-07-26 11:17:06.691299] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.418 [2024-07-26 11:17:06.691310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.418 [2024-07-26 11:17:06.691317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.418 [2024-07-26 11:17:06.694033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.418 [2024-07-26 11:17:06.702463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.419 [2024-07-26 11:17:06.703187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.419 [2024-07-26 11:17:06.703232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.419 [2024-07-26 11:17:06.703253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.419 [2024-07-26 11:17:06.703503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.419 [2024-07-26 11:17:06.703666] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.419 [2024-07-26 11:17:06.703675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.419 [2024-07-26 11:17:06.703682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.419 [2024-07-26 11:17:06.706370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.419 [2024-07-26 11:17:06.715389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.419 [2024-07-26 11:17:06.716114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.419 [2024-07-26 11:17:06.716156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.419 [2024-07-26 11:17:06.716177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.419 [2024-07-26 11:17:06.716428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.419 [2024-07-26 11:17:06.716591] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.419 [2024-07-26 11:17:06.716600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.419 [2024-07-26 11:17:06.716606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.419 [2024-07-26 11:17:06.719297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.419 [2024-07-26 11:17:06.728250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.419 [2024-07-26 11:17:06.729011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.419 [2024-07-26 11:17:06.729068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.419 [2024-07-26 11:17:06.729091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.419 [2024-07-26 11:17:06.729408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.419 [2024-07-26 11:17:06.729572] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.419 [2024-07-26 11:17:06.729581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.419 [2024-07-26 11:17:06.729587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.419 [2024-07-26 11:17:06.732273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.419 [2024-07-26 11:17:06.741292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.419 [2024-07-26 11:17:06.741970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.419 [2024-07-26 11:17:06.741987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.419 [2024-07-26 11:17:06.741994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.419 [2024-07-26 11:17:06.742176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.419 [2024-07-26 11:17:06.742355] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.419 [2024-07-26 11:17:06.742365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.419 [2024-07-26 11:17:06.742371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.419 [2024-07-26 11:17:06.745200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.419 [2024-07-26 11:17:06.754379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.419 [2024-07-26 11:17:06.755113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.419 [2024-07-26 11:17:06.755130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.419 [2024-07-26 11:17:06.755138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.419 [2024-07-26 11:17:06.755322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.419 [2024-07-26 11:17:06.755495] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.419 [2024-07-26 11:17:06.755505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.419 [2024-07-26 11:17:06.755511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.419 [2024-07-26 11:17:06.758339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
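(Editor's note on the `Failed to flush tqpair=... (9): Bad file descriptor` records that follow each refused connect: errno 9 on Linux is EBADF, which suggests the flush is being attempted on a socket descriptor that is no longer, or never became, valid after the failed connection. A trivial standalone illustration of that errno, independent of SPDK's nvme_tcp flush path:)

/* ebadf_demo.c - illustrative only; NOT SPDK's nvme_tcp code.
 * Writing to a descriptor that has already been closed fails with
 * errno 9 (EBADF), the same "(9): Bad file descriptor" text seen above.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) {
        perror("pipe");
        return 1;
    }

    close(fds[1]);                      /* invalidate the write end */

    if (write(fds[1], "x", 1) < 0) {
        /* Prints: write failed: (9): Bad file descriptor */
        printf("write failed: (%d): %s\n", errno, strerror(errno));
    }

    close(fds[0]);
    return 0;
}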
00:28:47.419 [2024-07-26 11:17:06.767513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.419 [2024-07-26 11:17:06.768167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.419 [2024-07-26 11:17:06.768184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.419 [2024-07-26 11:17:06.768191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.419 [2024-07-26 11:17:06.768371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.419 [2024-07-26 11:17:06.768548] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.419 [2024-07-26 11:17:06.768565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.419 [2024-07-26 11:17:06.768571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.419 [2024-07-26 11:17:06.771402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.419 [2024-07-26 11:17:06.780599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.419 [2024-07-26 11:17:06.781326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.419 [2024-07-26 11:17:06.781344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.419 [2024-07-26 11:17:06.781351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.419 [2024-07-26 11:17:06.781528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.419 [2024-07-26 11:17:06.781705] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.419 [2024-07-26 11:17:06.781715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.419 [2024-07-26 11:17:06.781722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.419 [2024-07-26 11:17:06.784552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.419 [2024-07-26 11:17:06.793778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.419 [2024-07-26 11:17:06.794509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.419 [2024-07-26 11:17:06.794527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.419 [2024-07-26 11:17:06.794536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.419 [2024-07-26 11:17:06.794716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.419 [2024-07-26 11:17:06.794893] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.419 [2024-07-26 11:17:06.794903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.419 [2024-07-26 11:17:06.794909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.419 [2024-07-26 11:17:06.797739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.419 [2024-07-26 11:17:06.806942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.419 [2024-07-26 11:17:06.807611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.419 [2024-07-26 11:17:06.807629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.419 [2024-07-26 11:17:06.807636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.419 [2024-07-26 11:17:06.807814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.419 [2024-07-26 11:17:06.807992] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.419 [2024-07-26 11:17:06.808003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.419 [2024-07-26 11:17:06.808014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.419 [2024-07-26 11:17:06.810848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.419 [2024-07-26 11:17:06.820055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.419 [2024-07-26 11:17:06.820692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.419 [2024-07-26 11:17:06.820709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.419 [2024-07-26 11:17:06.820717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.419 [2024-07-26 11:17:06.820894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.419 [2024-07-26 11:17:06.821079] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.419 [2024-07-26 11:17:06.821089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.419 [2024-07-26 11:17:06.821096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.419 [2024-07-26 11:17:06.823925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.419 [2024-07-26 11:17:06.833120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.419 [2024-07-26 11:17:06.833753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.420 [2024-07-26 11:17:06.833771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.420 [2024-07-26 11:17:06.833778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.420 [2024-07-26 11:17:06.833955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.420 [2024-07-26 11:17:06.834140] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.420 [2024-07-26 11:17:06.834150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.420 [2024-07-26 11:17:06.834157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.420 [2024-07-26 11:17:06.836983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.420 [2024-07-26 11:17:06.846184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.420 [2024-07-26 11:17:06.846887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.420 [2024-07-26 11:17:06.846904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.420 [2024-07-26 11:17:06.846912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.420 [2024-07-26 11:17:06.847115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.420 [2024-07-26 11:17:06.847300] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.420 [2024-07-26 11:17:06.847310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.420 [2024-07-26 11:17:06.847317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.420 [2024-07-26 11:17:06.850186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.420 [2024-07-26 11:17:06.859375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.420 [2024-07-26 11:17:06.860114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.420 [2024-07-26 11:17:06.860131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.420 [2024-07-26 11:17:06.860138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.420 [2024-07-26 11:17:06.860315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.420 [2024-07-26 11:17:06.860492] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.420 [2024-07-26 11:17:06.860502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.420 [2024-07-26 11:17:06.860508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.420 [2024-07-26 11:17:06.863339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.420 [2024-07-26 11:17:06.872541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.420 [2024-07-26 11:17:06.873257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.420 [2024-07-26 11:17:06.873275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.420 [2024-07-26 11:17:06.873282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.420 [2024-07-26 11:17:06.873459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.420 [2024-07-26 11:17:06.873637] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.420 [2024-07-26 11:17:06.873647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.420 [2024-07-26 11:17:06.873654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.420 [2024-07-26 11:17:06.876491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.420 [2024-07-26 11:17:06.885684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.420 [2024-07-26 11:17:06.886401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.420 [2024-07-26 11:17:06.886418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.420 [2024-07-26 11:17:06.886426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.420 [2024-07-26 11:17:06.886602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.420 [2024-07-26 11:17:06.886781] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.420 [2024-07-26 11:17:06.886790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.420 [2024-07-26 11:17:06.886797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.420 [2024-07-26 11:17:06.889639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.420 [2024-07-26 11:17:06.898831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.420 [2024-07-26 11:17:06.899480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.420 [2024-07-26 11:17:06.899497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.420 [2024-07-26 11:17:06.899505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.420 [2024-07-26 11:17:06.899682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.420 [2024-07-26 11:17:06.899865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.420 [2024-07-26 11:17:06.899874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.420 [2024-07-26 11:17:06.899881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.420 [2024-07-26 11:17:06.902712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.420 [2024-07-26 11:17:06.911898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.681 [2024-07-26 11:17:06.912576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.681 [2024-07-26 11:17:06.912595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.681 [2024-07-26 11:17:06.912603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.681 [2024-07-26 11:17:06.912780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.681 [2024-07-26 11:17:06.912959] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.681 [2024-07-26 11:17:06.912968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.681 [2024-07-26 11:17:06.912975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.681 [2024-07-26 11:17:06.915870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.681 [2024-07-26 11:17:06.925015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.681 [2024-07-26 11:17:06.925750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.681 [2024-07-26 11:17:06.925767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.681 [2024-07-26 11:17:06.925775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.681 [2024-07-26 11:17:06.925953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.681 [2024-07-26 11:17:06.926137] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.681 [2024-07-26 11:17:06.926147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.681 [2024-07-26 11:17:06.926154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.681 [2024-07-26 11:17:06.928986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.681 [2024-07-26 11:17:06.938180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.681 [2024-07-26 11:17:06.938915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.681 [2024-07-26 11:17:06.938932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.681 [2024-07-26 11:17:06.938940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.681 [2024-07-26 11:17:06.939122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.681 [2024-07-26 11:17:06.939300] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.681 [2024-07-26 11:17:06.939310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.681 [2024-07-26 11:17:06.939316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.681 [2024-07-26 11:17:06.942154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.681 [2024-07-26 11:17:06.951343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.681 [2024-07-26 11:17:06.952085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.681 [2024-07-26 11:17:06.952103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.681 [2024-07-26 11:17:06.952110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.681 [2024-07-26 11:17:06.952288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.681 [2024-07-26 11:17:06.952467] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.681 [2024-07-26 11:17:06.952477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.681 [2024-07-26 11:17:06.952484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.681 [2024-07-26 11:17:06.955347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.681 [2024-07-26 11:17:06.964380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.681 [2024-07-26 11:17:06.965131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.681 [2024-07-26 11:17:06.965149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.681 [2024-07-26 11:17:06.965156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.681 [2024-07-26 11:17:06.965333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.681 [2024-07-26 11:17:06.965512] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.681 [2024-07-26 11:17:06.965522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.681 [2024-07-26 11:17:06.965528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.681 [2024-07-26 11:17:06.968362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.681 [2024-07-26 11:17:06.977566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.681 [2024-07-26 11:17:06.978324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.681 [2024-07-26 11:17:06.978341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.681 [2024-07-26 11:17:06.978349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.681 [2024-07-26 11:17:06.978526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.681 [2024-07-26 11:17:06.978704] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.681 [2024-07-26 11:17:06.978714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.681 [2024-07-26 11:17:06.978720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.681 [2024-07-26 11:17:06.981554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.681 [2024-07-26 11:17:06.990744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.681 [2024-07-26 11:17:06.991459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.681 [2024-07-26 11:17:06.991476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.681 [2024-07-26 11:17:06.991487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.681 [2024-07-26 11:17:06.991665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.681 [2024-07-26 11:17:06.991843] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.681 [2024-07-26 11:17:06.991853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.681 [2024-07-26 11:17:06.991860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.681 [2024-07-26 11:17:06.994688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.681 [2024-07-26 11:17:07.003876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.681 [2024-07-26 11:17:07.004614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.681 [2024-07-26 11:17:07.004631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.681 [2024-07-26 11:17:07.004638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.681 [2024-07-26 11:17:07.004816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.681 [2024-07-26 11:17:07.004995] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.681 [2024-07-26 11:17:07.005005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.681 [2024-07-26 11:17:07.005011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.681 [2024-07-26 11:17:07.007874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.682 [2024-07-26 11:17:07.017060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.682 [2024-07-26 11:17:07.017659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.682 [2024-07-26 11:17:07.017676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.682 [2024-07-26 11:17:07.017683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.682 [2024-07-26 11:17:07.017860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.682 [2024-07-26 11:17:07.018037] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.682 [2024-07-26 11:17:07.018053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.682 [2024-07-26 11:17:07.018060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.682 [2024-07-26 11:17:07.020884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.682 [2024-07-26 11:17:07.030231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.682 [2024-07-26 11:17:07.030959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.682 [2024-07-26 11:17:07.030976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.682 [2024-07-26 11:17:07.030984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.682 [2024-07-26 11:17:07.031166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.682 [2024-07-26 11:17:07.031345] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.682 [2024-07-26 11:17:07.031357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.682 [2024-07-26 11:17:07.031364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.682 [2024-07-26 11:17:07.034195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.682 [2024-07-26 11:17:07.043410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.682 [2024-07-26 11:17:07.044144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.682 [2024-07-26 11:17:07.044162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.682 [2024-07-26 11:17:07.044169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.682 [2024-07-26 11:17:07.044346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.682 [2024-07-26 11:17:07.044524] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.682 [2024-07-26 11:17:07.044534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.682 [2024-07-26 11:17:07.044541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.682 [2024-07-26 11:17:07.047372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.682 [2024-07-26 11:17:07.056563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.682 [2024-07-26 11:17:07.057229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.682 [2024-07-26 11:17:07.057248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.682 [2024-07-26 11:17:07.057255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.682 [2024-07-26 11:17:07.057433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.682 [2024-07-26 11:17:07.057612] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.682 [2024-07-26 11:17:07.057623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.682 [2024-07-26 11:17:07.057629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.682 [2024-07-26 11:17:07.060460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.682 [2024-07-26 11:17:07.069651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.682 [2024-07-26 11:17:07.070388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.682 [2024-07-26 11:17:07.070405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.682 [2024-07-26 11:17:07.070413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.682 [2024-07-26 11:17:07.070590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.682 [2024-07-26 11:17:07.070769] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.682 [2024-07-26 11:17:07.070779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.682 [2024-07-26 11:17:07.070786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.682 [2024-07-26 11:17:07.073626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.682 [2024-07-26 11:17:07.082810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.682 [2024-07-26 11:17:07.083524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.682 [2024-07-26 11:17:07.083541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.682 [2024-07-26 11:17:07.083548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.682 [2024-07-26 11:17:07.083726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.682 [2024-07-26 11:17:07.083904] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.682 [2024-07-26 11:17:07.083914] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.682 [2024-07-26 11:17:07.083921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.682 [2024-07-26 11:17:07.086751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.682 [2024-07-26 11:17:07.095940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.682 [2024-07-26 11:17:07.096664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.682 [2024-07-26 11:17:07.096682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.682 [2024-07-26 11:17:07.096690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.682 [2024-07-26 11:17:07.096867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.682 [2024-07-26 11:17:07.097051] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.682 [2024-07-26 11:17:07.097061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.682 [2024-07-26 11:17:07.097067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.682 [2024-07-26 11:17:07.099890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.682 [2024-07-26 11:17:07.109108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.682 [2024-07-26 11:17:07.109720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.682 [2024-07-26 11:17:07.109737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.682 [2024-07-26 11:17:07.109745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.682 [2024-07-26 11:17:07.109928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.682 [2024-07-26 11:17:07.110118] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.682 [2024-07-26 11:17:07.110129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.682 [2024-07-26 11:17:07.110135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.682 [2024-07-26 11:17:07.113008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.682 [2024-07-26 11:17:07.122267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.682 [2024-07-26 11:17:07.122925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.682 [2024-07-26 11:17:07.122942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.682 [2024-07-26 11:17:07.122949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.682 [2024-07-26 11:17:07.123134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.682 [2024-07-26 11:17:07.123313] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.682 [2024-07-26 11:17:07.123324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.682 [2024-07-26 11:17:07.123330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.682 [2024-07-26 11:17:07.126158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.683 [2024-07-26 11:17:07.135341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.683 [2024-07-26 11:17:07.136104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.683 [2024-07-26 11:17:07.136146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.683 [2024-07-26 11:17:07.136169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.683 [2024-07-26 11:17:07.136540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.683 [2024-07-26 11:17:07.136719] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.683 [2024-07-26 11:17:07.136729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.683 [2024-07-26 11:17:07.136735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.683 [2024-07-26 11:17:07.139565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.683 [2024-07-26 11:17:07.148423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.683 [2024-07-26 11:17:07.149078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.683 [2024-07-26 11:17:07.149121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.683 [2024-07-26 11:17:07.149144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.683 [2024-07-26 11:17:07.149722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.683 [2024-07-26 11:17:07.150008] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.683 [2024-07-26 11:17:07.150018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.683 [2024-07-26 11:17:07.150024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.683 [2024-07-26 11:17:07.152854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.683 [2024-07-26 11:17:07.161424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.683 [2024-07-26 11:17:07.162088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.683 [2024-07-26 11:17:07.162132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.683 [2024-07-26 11:17:07.162154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.683 [2024-07-26 11:17:07.162574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.683 [2024-07-26 11:17:07.162748] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.683 [2024-07-26 11:17:07.162758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.683 [2024-07-26 11:17:07.162768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.683 [2024-07-26 11:17:07.165577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.683 [2024-07-26 11:17:07.174483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.945 [2024-07-26 11:17:07.175198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.945 [2024-07-26 11:17:07.175243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.945 [2024-07-26 11:17:07.175265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.945 [2024-07-26 11:17:07.175709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.945 [2024-07-26 11:17:07.175899] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.945 [2024-07-26 11:17:07.175909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.945 [2024-07-26 11:17:07.175916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.945 [2024-07-26 11:17:07.178745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.945 [2024-07-26 11:17:07.187483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.945 [2024-07-26 11:17:07.188237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.945 [2024-07-26 11:17:07.188282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.945 [2024-07-26 11:17:07.188305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.945 [2024-07-26 11:17:07.188885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.945 [2024-07-26 11:17:07.189444] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.945 [2024-07-26 11:17:07.189454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.945 [2024-07-26 11:17:07.189460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.945 [2024-07-26 11:17:07.192058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.945 [2024-07-26 11:17:07.200389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.945 [2024-07-26 11:17:07.201119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.945 [2024-07-26 11:17:07.201163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.945 [2024-07-26 11:17:07.201186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.945 [2024-07-26 11:17:07.201590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.945 [2024-07-26 11:17:07.201755] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.945 [2024-07-26 11:17:07.201764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.945 [2024-07-26 11:17:07.201770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.945 [2024-07-26 11:17:07.204395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.945 [2024-07-26 11:17:07.213365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.945 [2024-07-26 11:17:07.214094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.945 [2024-07-26 11:17:07.214137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.945 [2024-07-26 11:17:07.214160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.945 [2024-07-26 11:17:07.214625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.945 [2024-07-26 11:17:07.214789] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.945 [2024-07-26 11:17:07.214798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.945 [2024-07-26 11:17:07.214804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.945 [2024-07-26 11:17:07.217494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.945 [2024-07-26 11:17:07.226216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.945 [2024-07-26 11:17:07.226914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.945 [2024-07-26 11:17:07.226931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.945 [2024-07-26 11:17:07.226938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.945 [2024-07-26 11:17:07.227125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.945 [2024-07-26 11:17:07.227298] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.945 [2024-07-26 11:17:07.227308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.945 [2024-07-26 11:17:07.227315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.945 [2024-07-26 11:17:07.230034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.945 [2024-07-26 11:17:07.239253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.945 [2024-07-26 11:17:07.239987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.945 [2024-07-26 11:17:07.240029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.945 [2024-07-26 11:17:07.240067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.945 [2024-07-26 11:17:07.240597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.945 [2024-07-26 11:17:07.240771] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.945 [2024-07-26 11:17:07.240781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.945 [2024-07-26 11:17:07.240787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.945 [2024-07-26 11:17:07.243419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.945 [2024-07-26 11:17:07.252189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.945 [2024-07-26 11:17:07.252920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.945 [2024-07-26 11:17:07.252963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.945 [2024-07-26 11:17:07.252985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.945 [2024-07-26 11:17:07.253832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.945 [2024-07-26 11:17:07.254054] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.945 [2024-07-26 11:17:07.254066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.945 [2024-07-26 11:17:07.254073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.945 [2024-07-26 11:17:07.256690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.945 [2024-07-26 11:17:07.264994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.945 [2024-07-26 11:17:07.265724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.945 [2024-07-26 11:17:07.265769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.945 [2024-07-26 11:17:07.265791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.945 [2024-07-26 11:17:07.266383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.945 [2024-07-26 11:17:07.266580] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.946 [2024-07-26 11:17:07.266602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.946 [2024-07-26 11:17:07.266608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.946 [2024-07-26 11:17:07.270357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.946 [2024-07-26 11:17:07.278593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.946 [2024-07-26 11:17:07.279215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.946 [2024-07-26 11:17:07.279261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.946 [2024-07-26 11:17:07.279283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.946 [2024-07-26 11:17:07.279814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.946 [2024-07-26 11:17:07.279983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.946 [2024-07-26 11:17:07.279992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.946 [2024-07-26 11:17:07.279998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.946 [2024-07-26 11:17:07.282729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.946 [2024-07-26 11:17:07.291486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.946 [2024-07-26 11:17:07.292193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.946 [2024-07-26 11:17:07.292237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.946 [2024-07-26 11:17:07.292259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.946 [2024-07-26 11:17:07.292542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.946 [2024-07-26 11:17:07.292706] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.946 [2024-07-26 11:17:07.292715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.946 [2024-07-26 11:17:07.292721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.946 [2024-07-26 11:17:07.295415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.946 [2024-07-26 11:17:07.304290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.946 [2024-07-26 11:17:07.305025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.946 [2024-07-26 11:17:07.305080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.946 [2024-07-26 11:17:07.305102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.946 [2024-07-26 11:17:07.305562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.946 [2024-07-26 11:17:07.305726] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.946 [2024-07-26 11:17:07.305736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.946 [2024-07-26 11:17:07.305742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.946 [2024-07-26 11:17:07.308430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.946 [2024-07-26 11:17:07.317095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.946 [2024-07-26 11:17:07.317752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.946 [2024-07-26 11:17:07.317794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.946 [2024-07-26 11:17:07.317817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.946 [2024-07-26 11:17:07.318411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.946 [2024-07-26 11:17:07.318839] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.946 [2024-07-26 11:17:07.318848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.946 [2024-07-26 11:17:07.318855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.946 [2024-07-26 11:17:07.321478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.946 [2024-07-26 11:17:07.329889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.946 [2024-07-26 11:17:07.330598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.946 [2024-07-26 11:17:07.330641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.946 [2024-07-26 11:17:07.330663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.946 [2024-07-26 11:17:07.331255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.946 [2024-07-26 11:17:07.331837] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.946 [2024-07-26 11:17:07.331846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.946 [2024-07-26 11:17:07.331853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.946 [2024-07-26 11:17:07.334480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.946 [2024-07-26 11:17:07.342685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.946 [2024-07-26 11:17:07.343321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.946 [2024-07-26 11:17:07.343337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.946 [2024-07-26 11:17:07.343347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.946 [2024-07-26 11:17:07.343510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.946 [2024-07-26 11:17:07.343673] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.946 [2024-07-26 11:17:07.343683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.946 [2024-07-26 11:17:07.343688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.946 [2024-07-26 11:17:07.346379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.946 [2024-07-26 11:17:07.355594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.946 [2024-07-26 11:17:07.356306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.946 [2024-07-26 11:17:07.356349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.946 [2024-07-26 11:17:07.356370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.946 [2024-07-26 11:17:07.356892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.946 [2024-07-26 11:17:07.357062] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.946 [2024-07-26 11:17:07.357071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.946 [2024-07-26 11:17:07.357078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.946 [2024-07-26 11:17:07.359760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.946 [2024-07-26 11:17:07.368429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.946 [2024-07-26 11:17:07.369129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.946 [2024-07-26 11:17:07.369172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.946 [2024-07-26 11:17:07.369195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.946 [2024-07-26 11:17:07.369775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.946 [2024-07-26 11:17:07.370162] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.946 [2024-07-26 11:17:07.370172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.946 [2024-07-26 11:17:07.370179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.946 [2024-07-26 11:17:07.372848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.946 [2024-07-26 11:17:07.381363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.946 [2024-07-26 11:17:07.382085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.946 [2024-07-26 11:17:07.382135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.946 [2024-07-26 11:17:07.382142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.946 [2024-07-26 11:17:07.382305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.946 [2024-07-26 11:17:07.382468] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.946 [2024-07-26 11:17:07.382480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.946 [2024-07-26 11:17:07.382486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.946 [2024-07-26 11:17:07.385079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.946 [2024-07-26 11:17:07.394460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.946 [2024-07-26 11:17:07.395196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.946 [2024-07-26 11:17:07.395242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.946 [2024-07-26 11:17:07.395264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.946 [2024-07-26 11:17:07.395844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.946 [2024-07-26 11:17:07.396190] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.947 [2024-07-26 11:17:07.396200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.947 [2024-07-26 11:17:07.396207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.947 [2024-07-26 11:17:07.398870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.947 [2024-07-26 11:17:07.407382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.947 [2024-07-26 11:17:07.408112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.947 [2024-07-26 11:17:07.408155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.947 [2024-07-26 11:17:07.408177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.947 [2024-07-26 11:17:07.408756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.947 [2024-07-26 11:17:07.409350] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.947 [2024-07-26 11:17:07.409389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.947 [2024-07-26 11:17:07.409396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.947 [2024-07-26 11:17:07.412104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:47.947 [2024-07-26 11:17:07.420308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.947 [2024-07-26 11:17:07.421039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.947 [2024-07-26 11:17:07.421095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.947 [2024-07-26 11:17:07.421118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.947 [2024-07-26 11:17:07.421370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.947 [2024-07-26 11:17:07.421534] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.947 [2024-07-26 11:17:07.421543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.947 [2024-07-26 11:17:07.421550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.947 [2024-07-26 11:17:07.424240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:47.947 [2024-07-26 11:17:07.433120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.947 [2024-07-26 11:17:07.433846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.947 [2024-07-26 11:17:07.433889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:47.947 [2024-07-26 11:17:07.433911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:47.947 [2024-07-26 11:17:07.434504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:47.947 [2024-07-26 11:17:07.435070] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:47.947 [2024-07-26 11:17:07.435080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:47.947 [2024-07-26 11:17:07.435087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.947 [2024-07-26 11:17:07.437975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.208 [2024-07-26 11:17:07.446168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.208 [2024-07-26 11:17:07.446902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.208 [2024-07-26 11:17:07.446945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.208 [2024-07-26 11:17:07.446968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.208 [2024-07-26 11:17:07.447275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.208 [2024-07-26 11:17:07.447453] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.208 [2024-07-26 11:17:07.447462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.208 [2024-07-26 11:17:07.447469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.208 [2024-07-26 11:17:07.450063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.208 [2024-07-26 11:17:07.459063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.208 [2024-07-26 11:17:07.459777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.208 [2024-07-26 11:17:07.459821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.208 [2024-07-26 11:17:07.459843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.208 [2024-07-26 11:17:07.460100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.208 [2024-07-26 11:17:07.460274] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.208 [2024-07-26 11:17:07.460283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.208 [2024-07-26 11:17:07.460290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.208 [2024-07-26 11:17:07.462943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.208 [2024-07-26 11:17:07.471911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.208 [2024-07-26 11:17:07.472651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.208 [2024-07-26 11:17:07.472695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.208 [2024-07-26 11:17:07.472718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.208 [2024-07-26 11:17:07.473007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.208 [2024-07-26 11:17:07.473199] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.208 [2024-07-26 11:17:07.473209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.208 [2024-07-26 11:17:07.473216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.208 [2024-07-26 11:17:07.475882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.208 [2024-07-26 11:17:07.484757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.208 [2024-07-26 11:17:07.485479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.208 [2024-07-26 11:17:07.485523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.208 [2024-07-26 11:17:07.485546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.208 [2024-07-26 11:17:07.485954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.208 [2024-07-26 11:17:07.486122] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.208 [2024-07-26 11:17:07.486132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.208 [2024-07-26 11:17:07.486138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.208 [2024-07-26 11:17:07.488728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.208 [2024-07-26 11:17:07.497612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.208 [2024-07-26 11:17:07.498330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.208 [2024-07-26 11:17:07.498374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.208 [2024-07-26 11:17:07.498396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.208 [2024-07-26 11:17:07.498831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.208 [2024-07-26 11:17:07.499093] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.209 [2024-07-26 11:17:07.499106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.209 [2024-07-26 11:17:07.499116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.209 [2024-07-26 11:17:07.503168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.209 [2024-07-26 11:17:07.510902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.209 [2024-07-26 11:17:07.511639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.209 [2024-07-26 11:17:07.511683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.209 [2024-07-26 11:17:07.511705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.209 [2024-07-26 11:17:07.512299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.209 [2024-07-26 11:17:07.512581] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.209 [2024-07-26 11:17:07.512591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.209 [2024-07-26 11:17:07.512602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.209 [2024-07-26 11:17:07.515299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.209 [2024-07-26 11:17:07.523726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.209 [2024-07-26 11:17:07.524385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.209 [2024-07-26 11:17:07.524428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.209 [2024-07-26 11:17:07.524450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.209 [2024-07-26 11:17:07.525029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.209 [2024-07-26 11:17:07.525498] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.209 [2024-07-26 11:17:07.525508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.209 [2024-07-26 11:17:07.525515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.209 [2024-07-26 11:17:07.528166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.209 [2024-07-26 11:17:07.536517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.209 [2024-07-26 11:17:07.537171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.209 [2024-07-26 11:17:07.537214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.209 [2024-07-26 11:17:07.537236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.209 [2024-07-26 11:17:07.537581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.209 [2024-07-26 11:17:07.537744] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.209 [2024-07-26 11:17:07.537754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.209 [2024-07-26 11:17:07.537760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.209 [2024-07-26 11:17:07.540450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.209 [2024-07-26 11:17:07.549415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.209 [2024-07-26 11:17:07.550121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.209 [2024-07-26 11:17:07.550164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.209 [2024-07-26 11:17:07.550186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.209 [2024-07-26 11:17:07.550569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.209 [2024-07-26 11:17:07.550733] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.209 [2024-07-26 11:17:07.550742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.209 [2024-07-26 11:17:07.550748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.209 [2024-07-26 11:17:07.553441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.209 [2024-07-26 11:17:07.562286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.209 [2024-07-26 11:17:07.562937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.209 [2024-07-26 11:17:07.562980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.209 [2024-07-26 11:17:07.563002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.209 [2024-07-26 11:17:07.563403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.209 [2024-07-26 11:17:07.563577] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.209 [2024-07-26 11:17:07.563587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.209 [2024-07-26 11:17:07.563593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.209 [2024-07-26 11:17:07.566236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.209 [2024-07-26 11:17:07.575213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.209 [2024-07-26 11:17:07.575941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.209 [2024-07-26 11:17:07.575985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.209 [2024-07-26 11:17:07.576007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.209 [2024-07-26 11:17:07.576464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.209 [2024-07-26 11:17:07.576638] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.209 [2024-07-26 11:17:07.576647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.209 [2024-07-26 11:17:07.576654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.209 [2024-07-26 11:17:07.579291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.209 [2024-07-26 11:17:07.588111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.209 [2024-07-26 11:17:07.588771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.209 [2024-07-26 11:17:07.588812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.209 [2024-07-26 11:17:07.588833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.209 [2024-07-26 11:17:07.589246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.209 [2024-07-26 11:17:07.589420] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.209 [2024-07-26 11:17:07.589430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.209 [2024-07-26 11:17:07.589437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.209 [2024-07-26 11:17:07.592094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.209 [2024-07-26 11:17:07.600964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.209 [2024-07-26 11:17:07.601696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.209 [2024-07-26 11:17:07.601739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.209 [2024-07-26 11:17:07.601762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.209 [2024-07-26 11:17:07.602234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.209 [2024-07-26 11:17:07.602415] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.209 [2024-07-26 11:17:07.602425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.209 [2024-07-26 11:17:07.602432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.209 [2024-07-26 11:17:07.605083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.209 [2024-07-26 11:17:07.613799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.209 [2024-07-26 11:17:07.614514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.209 [2024-07-26 11:17:07.614531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.209 [2024-07-26 11:17:07.614537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.209 [2024-07-26 11:17:07.614700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.209 [2024-07-26 11:17:07.614864] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.209 [2024-07-26 11:17:07.614873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.209 [2024-07-26 11:17:07.614879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.209 [2024-07-26 11:17:07.617568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
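For reference: errno 111 in the posix_sock_create entries above is ECONNREFUSED on Linux, so each reconnect attempt is reaching 10.0.0.2:4420 while nothing is listening there; the bdevperf.sh trace just below shows the old target application being killed ("Killed ${NVMF_APP[@]}") before tgt_init brings up a new one. A minimal, illustrative shell probe for the same condition (not part of the test run, assuming bash's /dev/tcp support):

  # Probe the target address/port; a refused or unreachable connection here
  # corresponds to the repeated "connect() failed, errno = 111" entries above.
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "10.0.0.2:4420 is not accepting connections yet"
  fi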
00:28:48.209 [2024-07-26 11:17:07.626655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1605489 Killed "${NVMF_APP[@]}" "$@" 00:28:48.209 11:17:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:48.209 [2024-07-26 11:17:07.627383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.209 [2024-07-26 11:17:07.627401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.209 [2024-07-26 11:17:07.627409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.209 [2024-07-26 11:17:07.627586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.210 11:17:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:48.210 [2024-07-26 11:17:07.627764] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.210 [2024-07-26 11:17:07.627774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.210 [2024-07-26 11:17:07.627781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.210 11:17:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:48.210 11:17:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:48.210 11:17:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.210 [2024-07-26 11:17:07.630610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.210 11:17:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1606897 00:28:48.210 11:17:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1606897 00:28:48.210 11:17:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:48.210 11:17:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1606897 ']' 00:28:48.210 11:17:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.210 11:17:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:48.210 11:17:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:48.210 11:17:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:48.210 11:17:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.210 [2024-07-26 11:17:07.639799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.210 [2024-07-26 11:17:07.640508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.210 [2024-07-26 11:17:07.640525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.210 [2024-07-26 11:17:07.640533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.210 [2024-07-26 11:17:07.640709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.210 [2024-07-26 11:17:07.640887] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.210 [2024-07-26 11:17:07.640897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.210 [2024-07-26 11:17:07.640904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.210 [2024-07-26 11:17:07.643735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.210 [2024-07-26 11:17:07.652920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.210 [2024-07-26 11:17:07.653642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.210 [2024-07-26 11:17:07.653660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.210 [2024-07-26 11:17:07.653667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.210 [2024-07-26 11:17:07.653845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.210 [2024-07-26 11:17:07.654023] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.210 [2024-07-26 11:17:07.654032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.210 [2024-07-26 11:17:07.654040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.210 [2024-07-26 11:17:07.656873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.210 [2024-07-26 11:17:07.666062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.210 [2024-07-26 11:17:07.666692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.210 [2024-07-26 11:17:07.666709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.210 [2024-07-26 11:17:07.666716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.210 [2024-07-26 11:17:07.666893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.210 [2024-07-26 11:17:07.667076] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.210 [2024-07-26 11:17:07.667086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.210 [2024-07-26 11:17:07.667095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.210 [2024-07-26 11:17:07.669926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.210 [2024-07-26 11:17:07.679200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.210 [2024-07-26 11:17:07.679906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.210 [2024-07-26 11:17:07.679923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.210 [2024-07-26 11:17:07.679931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.210 [2024-07-26 11:17:07.680124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.210 [2024-07-26 11:17:07.680304] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.210 [2024-07-26 11:17:07.680314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.210 [2024-07-26 11:17:07.680321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.210 [2024-07-26 11:17:07.682321] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:48.210 [2024-07-26 11:17:07.682364] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.210 [2024-07-26 11:17:07.683121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.210 [2024-07-26 11:17:07.692239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.210 [2024-07-26 11:17:07.692922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.210 [2024-07-26 11:17:07.692939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.210 [2024-07-26 11:17:07.692947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.210 [2024-07-26 11:17:07.693123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.210 [2024-07-26 11:17:07.693296] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.210 [2024-07-26 11:17:07.693307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.210 [2024-07-26 11:17:07.693313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.210 [2024-07-26 11:17:07.696150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.471 [2024-07-26 11:17:07.705343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.472 [2024-07-26 11:17:07.706068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.472 [2024-07-26 11:17:07.706086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.472 [2024-07-26 11:17:07.706093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.472 [2024-07-26 11:17:07.706266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.472 [2024-07-26 11:17:07.706440] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.472 [2024-07-26 11:17:07.706449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.472 [2024-07-26 11:17:07.706458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.472 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.472 [2024-07-26 11:17:07.709348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.472 [2024-07-26 11:17:07.718516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.472 [2024-07-26 11:17:07.719235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.472 [2024-07-26 11:17:07.719253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.472 [2024-07-26 11:17:07.719261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.472 [2024-07-26 11:17:07.719438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.472 [2024-07-26 11:17:07.719616] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.472 [2024-07-26 11:17:07.719626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.472 [2024-07-26 11:17:07.719633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.472 [2024-07-26 11:17:07.722464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.472 [2024-07-26 11:17:07.731649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.472 [2024-07-26 11:17:07.732379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.472 [2024-07-26 11:17:07.732396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.472 [2024-07-26 11:17:07.732404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.472 [2024-07-26 11:17:07.732581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.472 [2024-07-26 11:17:07.732760] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.472 [2024-07-26 11:17:07.732770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.472 [2024-07-26 11:17:07.732776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.472 [2024-07-26 11:17:07.735578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.472 [2024-07-26 11:17:07.740403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:48.472 [2024-07-26 11:17:07.744678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.472 [2024-07-26 11:17:07.745337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.472 [2024-07-26 11:17:07.745355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.472 [2024-07-26 11:17:07.745363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.472 [2024-07-26 11:17:07.745536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.472 [2024-07-26 11:17:07.745709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.472 [2024-07-26 11:17:07.745718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.472 [2024-07-26 11:17:07.745725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.472 [2024-07-26 11:17:07.748536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.472 [2024-07-26 11:17:07.757739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.472 [2024-07-26 11:17:07.758467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.472 [2024-07-26 11:17:07.758487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.472 [2024-07-26 11:17:07.758494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.472 [2024-07-26 11:17:07.758666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.472 [2024-07-26 11:17:07.758837] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.472 [2024-07-26 11:17:07.758844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.472 [2024-07-26 11:17:07.758850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.472 [2024-07-26 11:17:07.761693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
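The "-m 0xE" / "-c 0xE" mask passed to nvmf_tgt above selects CPU cores 1-3 (0xE = 1110 binary), which is consistent with the "Total cores available: 3" notice. A small illustrative snippet (not taken from the test scripts) to decode such a mask:

  # Decode an SPDK/DPDK hex core mask into the core IDs it enables.
  mask=0xE
  for i in $(seq 0 31); do
      (( (mask >> i) & 1 )) && echo "core $i enabled"
  done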
00:28:48.472 [2024-07-26 11:17:07.771019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.472 [2024-07-26 11:17:07.771740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.472 [2024-07-26 11:17:07.771758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.472 [2024-07-26 11:17:07.771765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.472 [2024-07-26 11:17:07.771938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.472 [2024-07-26 11:17:07.772117] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.472 [2024-07-26 11:17:07.772127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.472 [2024-07-26 11:17:07.772134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.472 [2024-07-26 11:17:07.774962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.472 [2024-07-26 11:17:07.784079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.472 [2024-07-26 11:17:07.784846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.472 [2024-07-26 11:17:07.784866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.472 [2024-07-26 11:17:07.784874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.472 [2024-07-26 11:17:07.785053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.472 [2024-07-26 11:17:07.785233] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.472 [2024-07-26 11:17:07.785243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.472 [2024-07-26 11:17:07.785249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.472 [2024-07-26 11:17:07.788045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.472 [2024-07-26 11:17:07.797118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.472 [2024-07-26 11:17:07.797782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.472 [2024-07-26 11:17:07.797800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.472 [2024-07-26 11:17:07.797807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.472 [2024-07-26 11:17:07.797980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.472 [2024-07-26 11:17:07.798164] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.472 [2024-07-26 11:17:07.798174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.472 [2024-07-26 11:17:07.798181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.472 [2024-07-26 11:17:07.800995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.472 [2024-07-26 11:17:07.810313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.472 [2024-07-26 11:17:07.811037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.472 [2024-07-26 11:17:07.811059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.472 [2024-07-26 11:17:07.811066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.472 [2024-07-26 11:17:07.811245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.472 [2024-07-26 11:17:07.811424] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.472 [2024-07-26 11:17:07.811433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.472 [2024-07-26 11:17:07.811440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.472 [2024-07-26 11:17:07.814269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.472 [2024-07-26 11:17:07.822053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.472 [2024-07-26 11:17:07.822079] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.472 [2024-07-26 11:17:07.822087] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.472 [2024-07-26 11:17:07.822093] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.472 [2024-07-26 11:17:07.822098] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:48.472 [2024-07-26 11:17:07.822138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.472 [2024-07-26 11:17:07.822224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.472 [2024-07-26 11:17:07.822226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.472 [2024-07-26 11:17:07.823463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.473 [2024-07-26 11:17:07.824095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.473 [2024-07-26 11:17:07.824112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.473 [2024-07-26 11:17:07.824121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.473 [2024-07-26 11:17:07.824300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.473 [2024-07-26 11:17:07.824479] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.473 [2024-07-26 11:17:07.824489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.473 [2024-07-26 11:17:07.824496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.473 [2024-07-26 11:17:07.827327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.473 [2024-07-26 11:17:07.836526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.473 [2024-07-26 11:17:07.837259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.473 [2024-07-26 11:17:07.837284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.473 [2024-07-26 11:17:07.837292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.473 [2024-07-26 11:17:07.837471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.473 [2024-07-26 11:17:07.837650] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.473 [2024-07-26 11:17:07.837660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.473 [2024-07-26 11:17:07.837667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.473 [2024-07-26 11:17:07.840499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.473 [2024-07-26 11:17:07.849735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.473 [2024-07-26 11:17:07.850400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.473 [2024-07-26 11:17:07.850420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.473 [2024-07-26 11:17:07.850428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.473 [2024-07-26 11:17:07.850607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.473 [2024-07-26 11:17:07.850785] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.473 [2024-07-26 11:17:07.850795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.473 [2024-07-26 11:17:07.850801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.473 [2024-07-26 11:17:07.853630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.473 [2024-07-26 11:17:07.862824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.473 [2024-07-26 11:17:07.863594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.473 [2024-07-26 11:17:07.863615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.473 [2024-07-26 11:17:07.863623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.473 [2024-07-26 11:17:07.863801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.473 [2024-07-26 11:17:07.863980] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.473 [2024-07-26 11:17:07.863990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.473 [2024-07-26 11:17:07.863997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.473 [2024-07-26 11:17:07.866828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.473 [2024-07-26 11:17:07.876038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.473 [2024-07-26 11:17:07.876802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.473 [2024-07-26 11:17:07.876822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.473 [2024-07-26 11:17:07.876830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.473 [2024-07-26 11:17:07.877009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.473 [2024-07-26 11:17:07.877198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.473 [2024-07-26 11:17:07.877209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.473 [2024-07-26 11:17:07.877216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.473 [2024-07-26 11:17:07.880045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.473 [2024-07-26 11:17:07.889231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.473 [2024-07-26 11:17:07.889905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.473 [2024-07-26 11:17:07.889921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.473 [2024-07-26 11:17:07.889929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.473 [2024-07-26 11:17:07.890110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.473 [2024-07-26 11:17:07.890289] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.473 [2024-07-26 11:17:07.890299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.473 [2024-07-26 11:17:07.890306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.473 [2024-07-26 11:17:07.893143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.473 [2024-07-26 11:17:07.902322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.473 [2024-07-26 11:17:07.902978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.473 [2024-07-26 11:17:07.902995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.473 [2024-07-26 11:17:07.903002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.473 [2024-07-26 11:17:07.903184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.473 [2024-07-26 11:17:07.903363] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.473 [2024-07-26 11:17:07.903373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.473 [2024-07-26 11:17:07.903379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.473 [2024-07-26 11:17:07.906205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.473 [2024-07-26 11:17:07.915386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.473 [2024-07-26 11:17:07.916116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.473 [2024-07-26 11:17:07.916133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.473 [2024-07-26 11:17:07.916140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.473 [2024-07-26 11:17:07.916325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.473 [2024-07-26 11:17:07.916498] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.473 [2024-07-26 11:17:07.916507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.473 [2024-07-26 11:17:07.916514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.473 [2024-07-26 11:17:07.919347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.473 [2024-07-26 11:17:07.928525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.473 [2024-07-26 11:17:07.929267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.473 [2024-07-26 11:17:07.929284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.473 [2024-07-26 11:17:07.929291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.473 [2024-07-26 11:17:07.929468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.473 [2024-07-26 11:17:07.929646] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.473 [2024-07-26 11:17:07.929656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.473 [2024-07-26 11:17:07.929663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.473 [2024-07-26 11:17:07.932490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.473 [2024-07-26 11:17:07.941667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.473 [2024-07-26 11:17:07.942396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.473 [2024-07-26 11:17:07.942413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.473 [2024-07-26 11:17:07.942420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.473 [2024-07-26 11:17:07.942597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.473 [2024-07-26 11:17:07.942775] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.474 [2024-07-26 11:17:07.942783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.474 [2024-07-26 11:17:07.942790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.474 [2024-07-26 11:17:07.945628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.474 [2024-07-26 11:17:07.954831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.474 [2024-07-26 11:17:07.955490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.474 [2024-07-26 11:17:07.955507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.474 [2024-07-26 11:17:07.955515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.474 [2024-07-26 11:17:07.955691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.474 [2024-07-26 11:17:07.955869] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.474 [2024-07-26 11:17:07.955879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.474 [2024-07-26 11:17:07.955886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.474 [2024-07-26 11:17:07.958711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.735 [2024-07-26 11:17:07.967932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.735 [2024-07-26 11:17:07.968649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.735 [2024-07-26 11:17:07.968666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.735 [2024-07-26 11:17:07.968678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.735 [2024-07-26 11:17:07.968856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.735 [2024-07-26 11:17:07.969035] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.735 [2024-07-26 11:17:07.969049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.735 [2024-07-26 11:17:07.969056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.735 [2024-07-26 11:17:07.971881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.735 [2024-07-26 11:17:07.981066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.735 [2024-07-26 11:17:07.981803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.735 [2024-07-26 11:17:07.981820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.735 [2024-07-26 11:17:07.981827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.735 [2024-07-26 11:17:07.982004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.735 [2024-07-26 11:17:07.982186] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.735 [2024-07-26 11:17:07.982195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.735 [2024-07-26 11:17:07.982202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.735 [2024-07-26 11:17:07.985023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.735 [2024-07-26 11:17:07.994208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.735 [2024-07-26 11:17:07.994944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.735 [2024-07-26 11:17:07.994960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.735 [2024-07-26 11:17:07.994967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.735 [2024-07-26 11:17:07.995150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.735 [2024-07-26 11:17:07.995328] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.735 [2024-07-26 11:17:07.995338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.735 [2024-07-26 11:17:07.995344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.735 [2024-07-26 11:17:07.998172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.735 [2024-07-26 11:17:08.007345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.735 [2024-07-26 11:17:08.008082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.735 [2024-07-26 11:17:08.008099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.735 [2024-07-26 11:17:08.008107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.735 [2024-07-26 11:17:08.008284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.735 [2024-07-26 11:17:08.008461] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.735 [2024-07-26 11:17:08.008472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.735 [2024-07-26 11:17:08.008479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.735 [2024-07-26 11:17:08.011309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.735 [2024-07-26 11:17:08.020486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.735 [2024-07-26 11:17:08.021223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.735 [2024-07-26 11:17:08.021240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.735 [2024-07-26 11:17:08.021247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.735 [2024-07-26 11:17:08.021424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.735 [2024-07-26 11:17:08.021602] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.735 [2024-07-26 11:17:08.021611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.735 [2024-07-26 11:17:08.021618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.735 [2024-07-26 11:17:08.024445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.735 [2024-07-26 11:17:08.033626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.736 [2024-07-26 11:17:08.034357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.736 [2024-07-26 11:17:08.034374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.736 [2024-07-26 11:17:08.034381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.736 [2024-07-26 11:17:08.034558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.736 [2024-07-26 11:17:08.034735] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.736 [2024-07-26 11:17:08.034745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.736 [2024-07-26 11:17:08.034751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.736 [2024-07-26 11:17:08.037580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.736 [2024-07-26 11:17:08.046757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.736 [2024-07-26 11:17:08.047418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.736 [2024-07-26 11:17:08.047435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.736 [2024-07-26 11:17:08.047442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.736 [2024-07-26 11:17:08.047619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.736 [2024-07-26 11:17:08.047796] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.736 [2024-07-26 11:17:08.047806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.736 [2024-07-26 11:17:08.047813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.736 [2024-07-26 11:17:08.050641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.736 [2024-07-26 11:17:08.059817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.736 [2024-07-26 11:17:08.060550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.736 [2024-07-26 11:17:08.060567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.736 [2024-07-26 11:17:08.060574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.736 [2024-07-26 11:17:08.060751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.736 [2024-07-26 11:17:08.060930] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.736 [2024-07-26 11:17:08.060940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.736 [2024-07-26 11:17:08.060947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.736 [2024-07-26 11:17:08.063773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.736 [2024-07-26 11:17:08.072952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.736 [2024-07-26 11:17:08.073700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.736 [2024-07-26 11:17:08.073717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.736 [2024-07-26 11:17:08.073724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.736 [2024-07-26 11:17:08.073901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.736 [2024-07-26 11:17:08.074084] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.736 [2024-07-26 11:17:08.074093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.736 [2024-07-26 11:17:08.074100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.736 [2024-07-26 11:17:08.076927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.736 [2024-07-26 11:17:08.086116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.736 [2024-07-26 11:17:08.086827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.736 [2024-07-26 11:17:08.086844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.736 [2024-07-26 11:17:08.086851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.736 [2024-07-26 11:17:08.087028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.736 [2024-07-26 11:17:08.087211] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.736 [2024-07-26 11:17:08.087221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.736 [2024-07-26 11:17:08.087229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.736 [2024-07-26 11:17:08.090054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.736 [2024-07-26 11:17:08.099237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.736 [2024-07-26 11:17:08.099972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.736 [2024-07-26 11:17:08.099989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.736 [2024-07-26 11:17:08.099996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.736 [2024-07-26 11:17:08.100182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.736 [2024-07-26 11:17:08.100360] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.736 [2024-07-26 11:17:08.100370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.736 [2024-07-26 11:17:08.100377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.736 [2024-07-26 11:17:08.103200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.736 [2024-07-26 11:17:08.112370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.736 [2024-07-26 11:17:08.113107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.736 [2024-07-26 11:17:08.113125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.736 [2024-07-26 11:17:08.113132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.736 [2024-07-26 11:17:08.113310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.736 [2024-07-26 11:17:08.113487] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.736 [2024-07-26 11:17:08.113497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.736 [2024-07-26 11:17:08.113504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.736 [2024-07-26 11:17:08.116331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.736 [2024-07-26 11:17:08.125505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.736 [2024-07-26 11:17:08.126256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.736 [2024-07-26 11:17:08.126273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.736 [2024-07-26 11:17:08.126280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.736 [2024-07-26 11:17:08.126457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.736 [2024-07-26 11:17:08.126634] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.736 [2024-07-26 11:17:08.126643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.736 [2024-07-26 11:17:08.126649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.736 [2024-07-26 11:17:08.129475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.736 [2024-07-26 11:17:08.138651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.736 [2024-07-26 11:17:08.139366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.736 [2024-07-26 11:17:08.139383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.736 [2024-07-26 11:17:08.139391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.736 [2024-07-26 11:17:08.139568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.736 [2024-07-26 11:17:08.139745] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.736 [2024-07-26 11:17:08.139754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.736 [2024-07-26 11:17:08.139764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.736 [2024-07-26 11:17:08.142594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.736 [2024-07-26 11:17:08.151776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.736 [2024-07-26 11:17:08.152507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.736 [2024-07-26 11:17:08.152525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.736 [2024-07-26 11:17:08.152533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.736 [2024-07-26 11:17:08.152710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.736 [2024-07-26 11:17:08.152889] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.736 [2024-07-26 11:17:08.152899] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.736 [2024-07-26 11:17:08.152905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.737 [2024-07-26 11:17:08.155737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.737 [2024-07-26 11:17:08.164919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.737 [2024-07-26 11:17:08.165517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.737 [2024-07-26 11:17:08.165535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.737 [2024-07-26 11:17:08.165543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.737 [2024-07-26 11:17:08.165720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.737 [2024-07-26 11:17:08.165899] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.737 [2024-07-26 11:17:08.165909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.737 [2024-07-26 11:17:08.165917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.737 [2024-07-26 11:17:08.168750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.737 [2024-07-26 11:17:08.178110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.737 [2024-07-26 11:17:08.178770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.737 [2024-07-26 11:17:08.178787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.737 [2024-07-26 11:17:08.178795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.737 [2024-07-26 11:17:08.178972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.737 [2024-07-26 11:17:08.179158] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.737 [2024-07-26 11:17:08.179170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.737 [2024-07-26 11:17:08.179178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.737 [2024-07-26 11:17:08.182008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.737 [2024-07-26 11:17:08.191203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.737 [2024-07-26 11:17:08.191936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.737 [2024-07-26 11:17:08.191956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.737 [2024-07-26 11:17:08.191964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.737 [2024-07-26 11:17:08.192146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.737 [2024-07-26 11:17:08.192324] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.737 [2024-07-26 11:17:08.192332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.737 [2024-07-26 11:17:08.192339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.737 [2024-07-26 11:17:08.195217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.737 [2024-07-26 11:17:08.204350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.737 [2024-07-26 11:17:08.205015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.737 [2024-07-26 11:17:08.205033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.737 [2024-07-26 11:17:08.205041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.737 [2024-07-26 11:17:08.205224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.737 [2024-07-26 11:17:08.205401] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.737 [2024-07-26 11:17:08.205411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.737 [2024-07-26 11:17:08.205417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.737 [2024-07-26 11:17:08.208248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.737 [2024-07-26 11:17:08.217440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.737 [2024-07-26 11:17:08.218153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.737 [2024-07-26 11:17:08.218171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.737 [2024-07-26 11:17:08.218179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.737 [2024-07-26 11:17:08.218356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.737 [2024-07-26 11:17:08.218535] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.737 [2024-07-26 11:17:08.218545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.737 [2024-07-26 11:17:08.218551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.737 [2024-07-26 11:17:08.221384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.998 [2024-07-26 11:17:08.230603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.998 [2024-07-26 11:17:08.231281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.998 [2024-07-26 11:17:08.231299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.998 [2024-07-26 11:17:08.231307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.998 [2024-07-26 11:17:08.231485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.998 [2024-07-26 11:17:08.231668] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.998 [2024-07-26 11:17:08.231678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.998 [2024-07-26 11:17:08.231685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.998 [2024-07-26 11:17:08.234517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.998 [2024-07-26 11:17:08.243709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.998 [2024-07-26 11:17:08.244491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.998 [2024-07-26 11:17:08.244509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.998 [2024-07-26 11:17:08.244516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.998 [2024-07-26 11:17:08.244694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.998 [2024-07-26 11:17:08.244872] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.998 [2024-07-26 11:17:08.244881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.998 [2024-07-26 11:17:08.244888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.998 [2024-07-26 11:17:08.247721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.998 [2024-07-26 11:17:08.256776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.998 [2024-07-26 11:17:08.257541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.999 [2024-07-26 11:17:08.257559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.999 [2024-07-26 11:17:08.257567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.999 [2024-07-26 11:17:08.257745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.999 [2024-07-26 11:17:08.257924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.999 [2024-07-26 11:17:08.257934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.999 [2024-07-26 11:17:08.257940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.999 [2024-07-26 11:17:08.260769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.999 [2024-07-26 11:17:08.269970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.999 [2024-07-26 11:17:08.270545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.999 [2024-07-26 11:17:08.270563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.999 [2024-07-26 11:17:08.270571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.999 [2024-07-26 11:17:08.270749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.999 [2024-07-26 11:17:08.270928] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.999 [2024-07-26 11:17:08.270937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.999 [2024-07-26 11:17:08.270944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.999 [2024-07-26 11:17:08.273791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.999 [2024-07-26 11:17:08.283161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.999 [2024-07-26 11:17:08.283806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.999 [2024-07-26 11:17:08.283823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.999 [2024-07-26 11:17:08.283831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.999 [2024-07-26 11:17:08.284008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.999 [2024-07-26 11:17:08.284193] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.999 [2024-07-26 11:17:08.284204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.999 [2024-07-26 11:17:08.284211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.999 [2024-07-26 11:17:08.287038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.999 [2024-07-26 11:17:08.296240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.999 [2024-07-26 11:17:08.296869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.999 [2024-07-26 11:17:08.296887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.999 [2024-07-26 11:17:08.296895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.999 [2024-07-26 11:17:08.297079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.999 [2024-07-26 11:17:08.297257] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.999 [2024-07-26 11:17:08.297267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.999 [2024-07-26 11:17:08.297274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.999 [2024-07-26 11:17:08.300104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.999 [2024-07-26 11:17:08.309293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.999 [2024-07-26 11:17:08.309946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.999 [2024-07-26 11:17:08.309964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.999 [2024-07-26 11:17:08.309971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.999 [2024-07-26 11:17:08.310152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.999 [2024-07-26 11:17:08.310331] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.999 [2024-07-26 11:17:08.310340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.999 [2024-07-26 11:17:08.310346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.999 [2024-07-26 11:17:08.313175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.999 [2024-07-26 11:17:08.322370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.999 [2024-07-26 11:17:08.323025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.999 [2024-07-26 11:17:08.323047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.999 [2024-07-26 11:17:08.323060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.999 [2024-07-26 11:17:08.323238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.999 [2024-07-26 11:17:08.323416] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.999 [2024-07-26 11:17:08.323426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.999 [2024-07-26 11:17:08.323432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.999 [2024-07-26 11:17:08.326261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.999 [2024-07-26 11:17:08.335450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.999 [2024-07-26 11:17:08.336214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.999 [2024-07-26 11:17:08.336231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.999 [2024-07-26 11:17:08.336240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.999 [2024-07-26 11:17:08.336417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.999 [2024-07-26 11:17:08.336594] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.999 [2024-07-26 11:17:08.336604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.999 [2024-07-26 11:17:08.336611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.999 [2024-07-26 11:17:08.339446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.999 [2024-07-26 11:17:08.348633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.999 [2024-07-26 11:17:08.349279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.999 [2024-07-26 11:17:08.349297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.999 [2024-07-26 11:17:08.349305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.999 [2024-07-26 11:17:08.349482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.999 [2024-07-26 11:17:08.349661] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.999 [2024-07-26 11:17:08.349671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.999 [2024-07-26 11:17:08.349678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.999 [2024-07-26 11:17:08.352510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.999 [2024-07-26 11:17:08.361706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.999 [2024-07-26 11:17:08.362317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.999 [2024-07-26 11:17:08.362335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.999 [2024-07-26 11:17:08.362343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.999 [2024-07-26 11:17:08.362520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.999 [2024-07-26 11:17:08.362699] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.999 [2024-07-26 11:17:08.362712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.999 [2024-07-26 11:17:08.362720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.999 [2024-07-26 11:17:08.365552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.999 [2024-07-26 11:17:08.374762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.999 [2024-07-26 11:17:08.375524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.999 [2024-07-26 11:17:08.375542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:48.999 [2024-07-26 11:17:08.375550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:48.999 [2024-07-26 11:17:08.375728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:48.999 [2024-07-26 11:17:08.375907] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.999 [2024-07-26 11:17:08.375916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.999 [2024-07-26 11:17:08.375923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.999 [2024-07-26 11:17:08.378750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.999 [2024-07-26 11:17:08.387943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.999 [2024-07-26 11:17:08.388685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.999 [2024-07-26 11:17:08.388703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.000 [2024-07-26 11:17:08.388711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.000 [2024-07-26 11:17:08.388889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.000 [2024-07-26 11:17:08.389072] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.000 [2024-07-26 11:17:08.389083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.000 [2024-07-26 11:17:08.389090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.000 [2024-07-26 11:17:08.391915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.000 [2024-07-26 11:17:08.401117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.000 [2024-07-26 11:17:08.401783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.000 [2024-07-26 11:17:08.401801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.000 [2024-07-26 11:17:08.401808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.000 [2024-07-26 11:17:08.401985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.000 [2024-07-26 11:17:08.402172] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.000 [2024-07-26 11:17:08.402182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.000 [2024-07-26 11:17:08.402189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.000 [2024-07-26 11:17:08.405013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.000 [2024-07-26 11:17:08.414204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.000 [2024-07-26 11:17:08.414808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.000 [2024-07-26 11:17:08.414825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.000 [2024-07-26 11:17:08.414833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.000 [2024-07-26 11:17:08.415009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.000 [2024-07-26 11:17:08.415195] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.000 [2024-07-26 11:17:08.415205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.000 [2024-07-26 11:17:08.415212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.000 [2024-07-26 11:17:08.418216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.000 [2024-07-26 11:17:08.427409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.000 [2024-07-26 11:17:08.428064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.000 [2024-07-26 11:17:08.428083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.000 [2024-07-26 11:17:08.428091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.000 [2024-07-26 11:17:08.428268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.000 [2024-07-26 11:17:08.428448] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.000 [2024-07-26 11:17:08.428458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.000 [2024-07-26 11:17:08.428465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.000 [2024-07-26 11:17:08.431298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.000 [2024-07-26 11:17:08.440492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.000 [2024-07-26 11:17:08.441314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.000 [2024-07-26 11:17:08.441332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.000 [2024-07-26 11:17:08.441339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.000 [2024-07-26 11:17:08.441516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.000 [2024-07-26 11:17:08.441695] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.000 [2024-07-26 11:17:08.441705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.000 [2024-07-26 11:17:08.441713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.000 [2024-07-26 11:17:08.444544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.000 [2024-07-26 11:17:08.453559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.000 [2024-07-26 11:17:08.453991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.000 [2024-07-26 11:17:08.454008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.000 [2024-07-26 11:17:08.454015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.000 [2024-07-26 11:17:08.454202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.000 [2024-07-26 11:17:08.454381] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.000 [2024-07-26 11:17:08.454391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.000 [2024-07-26 11:17:08.454398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.000 [2024-07-26 11:17:08.457231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.000 [2024-07-26 11:17:08.466601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.000 [2024-07-26 11:17:08.467290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.000 [2024-07-26 11:17:08.467307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.000 [2024-07-26 11:17:08.467315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.000 [2024-07-26 11:17:08.467494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.000 [2024-07-26 11:17:08.467673] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.000 [2024-07-26 11:17:08.467683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.000 [2024-07-26 11:17:08.467691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.000 [2024-07-26 11:17:08.470528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.000 [2024-07-26 11:17:08.479731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.000 [2024-07-26 11:17:08.480395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.000 [2024-07-26 11:17:08.480412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.000 [2024-07-26 11:17:08.480420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.000 [2024-07-26 11:17:08.480597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.000 [2024-07-26 11:17:08.480776] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.000 [2024-07-26 11:17:08.480786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.000 [2024-07-26 11:17:08.480793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.000 [2024-07-26 11:17:08.483621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.261 [2024-07-26 11:17:08.492805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.261 [2024-07-26 11:17:08.493491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.261 [2024-07-26 11:17:08.493509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.261 [2024-07-26 11:17:08.493517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.261 [2024-07-26 11:17:08.493694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.261 [2024-07-26 11:17:08.493879] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.261 [2024-07-26 11:17:08.493889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.261 [2024-07-26 11:17:08.493900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.261 [2024-07-26 11:17:08.496731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
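The repeated "connect() failed, errno = 111" entries above are the bdev_nvme reconnect path retrying while nothing is listening on 10.0.0.2:4420 yet: errno 111 is ECONNREFUSED, so every reset attempt ends in "Resetting controller failed." until the target's TCP listener is created further down in the log. A minimal bash sketch of the same "poll until the port accepts connections" idea, assuming only the address and port shown in the log (the variable names, retry count and sleep interval are illustrative and not part of the test):

    target_ip=10.0.0.2      # address and port taken from the log entries above
    target_port=4420
    for i in $(seq 1 50); do
        # /dev/tcp/<ip>/<port> is a bash redirection; the subshell exits non-zero
        # for as long as connect() keeps failing with ECONNREFUSED (errno 111)
        if (exec 3<>"/dev/tcp/${target_ip}/${target_port}") 2>/dev/null; then
            echo "listener on ${target_ip}:${target_port} is up after ${i} attempts"
            break
        fi
        sleep 0.2
    done

The test itself needs no such helper; the equivalent retries happen inside bdev_nvme, which is what produces the reset/reconnect entries above and the eventual "Resetting controller successful." notice once the listener exists.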
00:28:49.261 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:49.261 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:49.261 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:49.261 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:49.261 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.261 [2024-07-26 11:17:08.505918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.261 [2024-07-26 11:17:08.506615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.261 [2024-07-26 11:17:08.506634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.261 [2024-07-26 11:17:08.506642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.261 [2024-07-26 11:17:08.506819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.261 [2024-07-26 11:17:08.506999] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.261 [2024-07-26 11:17:08.507009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.261 [2024-07-26 11:17:08.507016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.261 [2024-07-26 11:17:08.509845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.261 [2024-07-26 11:17:08.519031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.261 [2024-07-26 11:17:08.519626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.261 [2024-07-26 11:17:08.519646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.261 [2024-07-26 11:17:08.519653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.261 [2024-07-26 11:17:08.519829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.261 [2024-07-26 11:17:08.520007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.261 [2024-07-26 11:17:08.520016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.261 [2024-07-26 11:17:08.520023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.261 [2024-07-26 11:17:08.522854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.261 [2024-07-26 11:17:08.532215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.261 [2024-07-26 11:17:08.532829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.261 [2024-07-26 11:17:08.532846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.261 [2024-07-26 11:17:08.532854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.261 [2024-07-26 11:17:08.533031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.261 [2024-07-26 11:17:08.533214] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.261 [2024-07-26 11:17:08.533224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.261 [2024-07-26 11:17:08.533234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.261 [2024-07-26 11:17:08.536063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.261 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.261 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:49.261 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.261 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.261 [2024-07-26 11:17:08.545416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.261 [2024-07-26 11:17:08.545721] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.261 [2024-07-26 11:17:08.546169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.261 [2024-07-26 11:17:08.546188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.261 [2024-07-26 11:17:08.546195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.261 [2024-07-26 11:17:08.546373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.261 [2024-07-26 11:17:08.546551] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.261 [2024-07-26 11:17:08.546561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.261 [2024-07-26 11:17:08.546568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.261 [2024-07-26 11:17:08.549399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.261 [2024-07-26 11:17:08.558579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.261 [2024-07-26 11:17:08.559287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.261 [2024-07-26 11:17:08.559304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.261 [2024-07-26 11:17:08.559312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.261 [2024-07-26 11:17:08.559490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.261 [2024-07-26 11:17:08.559667] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.261 [2024-07-26 11:17:08.559676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.261 [2024-07-26 11:17:08.559683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.261 [2024-07-26 11:17:08.562512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.261 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.261 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:49.261 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.261 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.262 [2024-07-26 11:17:08.571698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.262 [2024-07-26 11:17:08.572456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.262 [2024-07-26 11:17:08.572473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.262 [2024-07-26 11:17:08.572481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.262 [2024-07-26 11:17:08.572661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.262 [2024-07-26 11:17:08.572839] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.262 [2024-07-26 11:17:08.572849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.262 [2024-07-26 11:17:08.572856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.262 [2024-07-26 11:17:08.575696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
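The rpc_cmd calls interleaved with the errors above and just below (nvmf_create_transport, bdev_malloc_create, then nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener) build the NVMe-oF TCP target that the reconnect loop is waiting for. A rough sketch of the same sequence issued directly through scripts/rpc.py, with the NQN, serial number, Malloc parameters and 10.0.0.2:4420 listener copied verbatim from the log, and the rpc.py path and RPC socket left as assumptions:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
    $rpc nvmf_create_transport -t tcp -o -u 8192        # flags copied from the rpc_cmd line above
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener RPC completes, the log below switches from "Resetting controller failed." to "Resetting controller successful." and bdevperf can run its verify workload against Nvme1n1.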
00:28:49.262 Malloc0 00:28:49.262 [2024-07-26 11:17:08.584888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.262 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.262 [2024-07-26 11:17:08.585635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.262 [2024-07-26 11:17:08.585653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.262 [2024-07-26 11:17:08.585660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.262 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:49.262 [2024-07-26 11:17:08.585837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.262 [2024-07-26 11:17:08.586015] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.262 [2024-07-26 11:17:08.586025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.262 [2024-07-26 11:17:08.586033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.262 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.262 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.262 [2024-07-26 11:17:08.588859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.262 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.262 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:49.262 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.262 [2024-07-26 11:17:08.598047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.262 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.262 [2024-07-26 11:17:08.598798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.262 [2024-07-26 11:17:08.598816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0a980 with addr=10.0.0.2, port=4420 00:28:49.262 [2024-07-26 11:17:08.598824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0a980 is same with the state(5) to be set 00:28:49.262 [2024-07-26 11:17:08.599002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0a980 (9): Bad file descriptor 00:28:49.262 [2024-07-26 11:17:08.599184] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.262 [2024-07-26 11:17:08.599195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.262 [2024-07-26 11:17:08.599203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:49.262 [2024-07-26 11:17:08.602024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.262 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.262 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.262 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.262 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.262 [2024-07-26 11:17:08.608615] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.262 [2024-07-26 11:17:08.611208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.262 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.262 11:17:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1605967 00:28:49.262 [2024-07-26 11:17:08.689829] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:59.252 00:28:59.252 Latency(us) 00:28:59.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.252 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:59.252 Verification LBA range: start 0x0 length 0x4000 00:28:59.252 Nvme1n1 : 15.01 8017.12 31.32 12328.75 0.00 6271.14 1531.55 55164.22 00:28:59.252 =================================================================================================================== 00:28:59.252 Total : 8017.12 31.32 12328.75 0.00 6271.14 1531.55 55164.22 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:59.252 rmmod nvme_tcp 00:28:59.252 rmmod nvme_fabrics 00:28:59.252 rmmod nvme_keyring 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@489 -- # '[' -n 1606897 ']' 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1606897 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1606897 ']' 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1606897 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1606897 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:59.252 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1606897' 00:28:59.252 killing process with pid 1606897 00:28:59.253 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1606897 00:28:59.253 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1606897 00:28:59.253 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:59.253 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:59.253 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:59.253 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:59.253 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:59.253 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.253 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.253 11:17:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.194 11:17:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:00.194 00:29:00.194 real 0m26.104s 00:29:00.194 user 1m2.608s 00:29:00.194 sys 0m6.217s 00:29:00.194 11:17:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:00.194 11:17:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:00.194 ************************************ 00:29:00.194 END TEST nvmf_bdevperf 00:29:00.194 ************************************ 00:29:00.454 11:17:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:00.454 11:17:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:00.454 11:17:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:00.454 11:17:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.454 ************************************ 00:29:00.454 START TEST nvmf_target_disconnect 00:29:00.454 ************************************ 00:29:00.454 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:00.454 * Looking for test storage... 00:29:00.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:00.454 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.454 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:00.454 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.454 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.454 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.454 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.454 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:00.455 11:17:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:00.455 11:17:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:05.729 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:05.730 11:17:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:05.730 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:05.730 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:05.730 11:17:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:05.730 Found net devices under 0000:86:00.0: cvl_0_0 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:05.730 Found net devices under 0000:86:00.1: cvl_0_1 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:05.730 11:17:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:05.730 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:05.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:05.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:29:05.990 00:29:05.990 --- 10.0.0.2 ping statistics --- 00:29:05.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.990 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:05.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:29:05.990 00:29:05.990 --- 10.0.0.1 ping statistics --- 00:29:05.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.990 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:05.990 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:05.991 ************************************ 00:29:05.991 START TEST nvmf_target_disconnect_tc1 00:29:05.991 ************************************ 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:05.991 11:17:25 
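nvmf_tcp_init, traced above, gives the test a self-contained NVMe/TCP topology on a single host: the target-side port (cvl_0_0 in this run) is moved into its own network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, and an iptables rule admits NVMe/TCP traffic on port 4420. A condensed sketch of the commands the harness ran, with the interface names and addresses detected in this run:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # admit NVMe/TCP
    ping -c 1 10.0.0.2                                           # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1                       # namespace -> root ns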
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:05.991 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.991 [2024-07-26 11:17:25.459568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.991 [2024-07-26 11:17:25.459670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d5ae60 with addr=10.0.0.2, port=4420 00:29:05.991 [2024-07-26 11:17:25.459722] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:05.991 [2024-07-26 11:17:25.459747] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:05.991 [2024-07-26 11:17:25.459766] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:05.991 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:05.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:05.991 Initializing NVMe Controllers 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:05.991 00:29:05.991 real 0m0.102s 00:29:05.991 user 0m0.040s 00:29:05.991 sys 0m0.061s 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:05.991 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:05.991 ************************************ 00:29:05.991 END TEST nvmf_target_disconnect_tc1 00:29:05.991 ************************************ 00:29:06.250 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:06.250 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:06.250 11:17:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:06.251 ************************************ 00:29:06.251 START TEST nvmf_target_disconnect_tc2 00:29:06.251 ************************************ 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1612051 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1612051 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1612051 ']' 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:06.251 11:17:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:06.251 [2024-07-26 11:17:25.585524] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:06.251 [2024-07-26 11:17:25.585564] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.251 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.251 [2024-07-26 11:17:25.654720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:06.251 [2024-07-26 11:17:25.735428] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:06.251 [2024-07-26 11:17:25.735465] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:06.251 [2024-07-26 11:17:25.735472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.251 [2024-07-26 11:17:25.735478] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.251 [2024-07-26 11:17:25.735483] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:06.251 [2024-07-26 11:17:25.735592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:06.251 [2024-07-26 11:17:25.735699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:06.251 [2024-07-26 11:17:25.735803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:06.251 [2024-07-26 11:17:25.735804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.189 Malloc0 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.189 [2024-07-26 11:17:26.443136] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
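The rpc_cmd calls traced around this point bring up the target that the reconnect example is later pointed at: a 64 MiB Malloc bdev, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and data plus discovery listeners on 10.0.0.2:4420. rpc_cmd here ultimately drives SPDK's scripts/rpc.py, so a standalone equivalent of the same sequence against an already running nvmf_tgt would look roughly like the sketch below (the harness also passed an extra -o transport option, omitted here):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420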
00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.189 [2024-07-26 11:17:26.468172] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1612091 00:29:07.189 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:07.190 11:17:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.190 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.102 11:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1612051 00:29:09.102 11:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:09.102 Read completed with error (sct=0, sc=8) 00:29:09.102 starting I/O failed 00:29:09.102 Read completed with error (sct=0, sc=8) 00:29:09.102 starting I/O failed 00:29:09.102 Read completed with error (sct=0, sc=8) 00:29:09.102 starting I/O failed 00:29:09.102 Read completed with error (sct=0, sc=8) 00:29:09.102 starting 
I/O failed 00:29:09.102 Read completed with error (sct=0, sc=8) 00:29:09.102 starting I/O failed 00:29:09.102 Read completed with error (sct=0, sc=8) 00:29:09.102 starting I/O failed 00:29:09.102 Read completed with error (sct=0, sc=8) 00:29:09.102 starting I/O failed 00:29:09.102 Read completed with error (sct=0, sc=8) 00:29:09.102 starting I/O failed 00:29:09.102 Write completed with error (sct=0, sc=8) 00:29:09.102 starting I/O failed 00:29:09.102 Read completed with error (sct=0, sc=8) 00:29:09.102 starting I/O failed 00:29:09.102 Write completed with error (sct=0, sc=8) 00:29:09.102 starting I/O failed 00:29:09.102 Write completed with error (sct=0, sc=8) 00:29:09.102 starting I/O failed 00:29:09.102 Write completed with error (sct=0, sc=8) 00:29:09.102 starting I/O failed 00:29:09.102 Write completed with error (sct=0, sc=8) 00:29:09.102 starting I/O failed 00:29:09.102 Write completed with error (sct=0, sc=8) 00:29:09.102 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 [2024-07-26 11:17:28.494316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 
00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 [2024-07-26 11:17:28.494514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 
Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 [2024-07-26 11:17:28.494719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Read completed with error (sct=0, sc=8) 00:29:09.103 starting I/O failed 00:29:09.103 Write completed 
with error (sct=0, sc=8) 00:29:09.104 starting I/O failed 00:29:09.104 Read completed with error (sct=0, sc=8) 00:29:09.104 starting I/O failed 00:29:09.104 Read completed with error (sct=0, sc=8) 00:29:09.104 starting I/O failed 00:29:09.104 Write completed with error (sct=0, sc=8) 00:29:09.104 starting I/O failed 00:29:09.104 Read completed with error (sct=0, sc=8) 00:29:09.104 starting I/O failed 00:29:09.104 Write completed with error (sct=0, sc=8) 00:29:09.104 starting I/O failed 00:29:09.104 Write completed with error (sct=0, sc=8) 00:29:09.104 starting I/O failed 00:29:09.104 [2024-07-26 11:17:28.494913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:09.104 [2024-07-26 11:17:28.495391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.495437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.496003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.496036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.496595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.496627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.497167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.497199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.497745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.497775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.498329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.498360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.498883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.498914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.499350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.499382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 
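Everything from here on is the expected fallout of the kill -9 issued against nvmfpid (1612051) a moment earlier: outstanding I/O on the four qpairs completes in error, and every reconnect attempt from the reconnect example is refused because nothing is listening on 10.0.0.2:4420 any longer. errno 111 in these posix_sock_create messages is ECONNREFUSED on Linux; to confirm on the build host (header path may differ by distro):

    grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
    # -> #define ECONNREFUSED    111     /* Connection refused */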
00:29:09.104 [2024-07-26 11:17:28.499864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.499894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.500323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.500338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.500780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.500811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.501366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.501397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.501916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.501947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.502508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.502539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.503020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.503067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.503608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.503638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.504144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.504175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.504732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.504762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 
00:29:09.104 [2024-07-26 11:17:28.505053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.505085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.505600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.505630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.505926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.505941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.506477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.506492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.506964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.506978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.507204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.507219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.507664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.507679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.508113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.508129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.508654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.508669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.509173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.509188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 
00:29:09.104 [2024-07-26 11:17:28.509657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.509672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.510119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.510150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.510647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.510678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.511159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.511190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.511696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.511726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.512277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.512308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.104 qpair failed and we were unable to recover it. 00:29:09.104 [2024-07-26 11:17:28.512876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.104 [2024-07-26 11:17:28.512906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.513409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.513440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.514000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.514031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.514489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.514520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 
00:29:09.105 [2024-07-26 11:17:28.514969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.514984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.515304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.515319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.515788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.515819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.516313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.516345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.516766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.516781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.517231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.517263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.517821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.517852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.518470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.518501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.518913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.518944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.519445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.519475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 
00:29:09.105 [2024-07-26 11:17:28.519950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.519981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.520542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.520574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.521110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.521141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.521626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.521656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.522220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.522252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.522543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.522573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.523041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.523089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.523627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.523658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.524192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.524224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.524719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.524749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 
00:29:09.105 [2024-07-26 11:17:28.525261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.525292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.525849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.525880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.526295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.526327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.526739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.526754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.527281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.527312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.527786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.527816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.528233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.528265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.528767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.528798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.529214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.529245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 00:29:09.105 [2024-07-26 11:17:28.529687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.105 [2024-07-26 11:17:28.529701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.105 qpair failed and we were unable to recover it. 
00:29:09.105 [2024-07-26 11:17:28.530211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.105 [2024-07-26 11:17:28.530243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420
00:29:09.105 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, connection refused, to addr=10.0.0.2, port=4420) repeats for tqpair=0x7f3438000b90 from 11:17:28.530 through 11:17:28.599; each qpair failed and we were unable to recover it ...]
00:29:09.425 [2024-07-26 11:17:28.600212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.425 [2024-07-26 11:17:28.600249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420
00:29:09.425 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x7f3448000b90 through 11:17:28.602, then again for tqpair=0x7f3438000b90 from 11:17:28.603 through 11:17:28.640; each qpair failed and we were unable to recover it ...]
00:29:09.427 [2024-07-26 11:17:28.639997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.427 [2024-07-26 11:17:28.640027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420
00:29:09.427 qpair failed and we were unable to recover it.
00:29:09.427 [2024-07-26 11:17:28.640541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-07-26 11:17:28.640571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-07-26 11:17:28.641110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-07-26 11:17:28.641141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-07-26 11:17:28.641678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-07-26 11:17:28.641708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-07-26 11:17:28.642266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-07-26 11:17:28.642297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-07-26 11:17:28.642822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-07-26 11:17:28.642853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-07-26 11:17:28.643404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-07-26 11:17:28.643436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-07-26 11:17:28.643995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-07-26 11:17:28.644026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-07-26 11:17:28.644389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-07-26 11:17:28.644420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-07-26 11:17:28.644871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-07-26 11:17:28.644902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-07-26 11:17:28.645420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-07-26 11:17:28.645435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 
00:29:09.427 [2024-07-26 11:17:28.645938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-07-26 11:17:28.645969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-07-26 11:17:28.646397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-07-26 11:17:28.646428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-07-26 11:17:28.646986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.647016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.647617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.647649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.648135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.648166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.648721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.648752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.649230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.649267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.649761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.649791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.650277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.650308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.650856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.650886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 
00:29:09.428 [2024-07-26 11:17:28.651367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.651398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.651889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.651919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.652335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.652367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.652913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.652943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.653428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.653458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.653992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.654022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.654564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.654595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.655089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.655121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.655719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.655749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.656226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.656258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 
00:29:09.428 [2024-07-26 11:17:28.656828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.656858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.657345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.657376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.657852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.657882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.658377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.658408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.658992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.659023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.659514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.659544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.660099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.660130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.660610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.660640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.661194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.661224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.661807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.661837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 
00:29:09.428 [2024-07-26 11:17:28.662324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.662356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.662829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.662871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.663375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.663406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.663946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.663977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.664533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.664564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.664996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.665027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.665443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.665472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.665951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.665981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.666454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.666469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-07-26 11:17:28.666942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-07-26 11:17:28.666973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 
00:29:09.429 [2024-07-26 11:17:28.667448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.667479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.667969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.668012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.668535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.668550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.668937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.668968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.669398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.669429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.669983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.670012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.670588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.670625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.671079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.671111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.671665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.671697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.672203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.672235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 
00:29:09.429 [2024-07-26 11:17:28.672713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.672744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.673315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.673347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.673896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.673926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.674403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.674434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.674970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.675000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.675484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.675515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.676058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.676089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.676652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.676682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.677168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.677200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.677734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.677765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 
00:29:09.429 [2024-07-26 11:17:28.678259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.678290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.678851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.678889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.679335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.679367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.679900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.679930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.680427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.680458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.429 [2024-07-26 11:17:28.680993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.429 [2024-07-26 11:17:28.681023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.429 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.681512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.681545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.681803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.681833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.682322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.682354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.682849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.682864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 
00:29:09.430 [2024-07-26 11:17:28.683397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.683428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.683867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.683898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.684370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.684400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.684928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.684963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.685449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.685480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.685970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.686001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.686588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.686619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.687103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.687135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.687613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.687647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.688184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.688217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 
00:29:09.430 [2024-07-26 11:17:28.688777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.688808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.689336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.689373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.689905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.689947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.690454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.690486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.690963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.690994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.691481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.691512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.692056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.692088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.692631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.692662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.693195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.693227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.693793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.693824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 
00:29:09.430 [2024-07-26 11:17:28.694298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.694328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.694861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.694892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.695399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.695431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.695908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.695939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.696425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.696456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.697011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.697041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.697558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.697589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.698005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.698035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.698584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.698599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.699143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.699175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 
00:29:09.430 [2024-07-26 11:17:28.699726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.699756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.700294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.700309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.700837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.430 [2024-07-26 11:17:28.700875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.430 qpair failed and we were unable to recover it. 00:29:09.430 [2024-07-26 11:17:28.701120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.701149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.701637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.701668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.702225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.702255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.702783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.702813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.703288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.703320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.703800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.703830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.704385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.704411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 
00:29:09.431 [2024-07-26 11:17:28.704918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.704948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.705481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.705512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.706016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.706054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.706566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.706603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.707144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.707176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.707654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.707684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.708240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.708270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.708818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.708849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.709386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.709418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-07-26 11:17:28.709899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-07-26 11:17:28.709928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 
00:29:09.431 [2024-07-26 11:17:28.710432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:09.431 [2024-07-26 11:17:28.710464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 
00:29:09.431 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence repeats for every reconnect attempt from 11:17:28.710 through 11:17:28.820: connect() to 10.0.0.2, port 4420 is refused with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports the sock connection error on tqpair=0x7f3438000b90, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:09.438 [2024-07-26 11:17:28.820608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:09.438 [2024-07-26 11:17:28.820639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 
00:29:09.438 qpair failed and we were unable to recover it. 
00:29:09.438 [2024-07-26 11:17:28.821217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.821249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.821757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.821788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.822314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.822329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.822764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.822796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.823352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.823383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.823919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.823950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.824507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.824540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.825015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.825052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.825537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.825568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.826128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.826159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 
00:29:09.438 [2024-07-26 11:17:28.826720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.826751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.827227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.827258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.827517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.827548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.828103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.828135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.828669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.828700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.829192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.829223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.438 qpair failed and we were unable to recover it. 00:29:09.438 [2024-07-26 11:17:28.829726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.438 [2024-07-26 11:17:28.829757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.830250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.830281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.830822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.830853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.831323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.831354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 
00:29:09.439 [2024-07-26 11:17:28.831910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.831940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.832448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.832479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.833065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.833096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.833640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.833681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.834227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.834259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.834741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.834771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.835328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.835359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.835839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.835869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.836344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.836376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.836852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.836866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 
00:29:09.439 [2024-07-26 11:17:28.837386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.837418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.837697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.837728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.838280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.838312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.838803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.838834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.839336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.839368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.839906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.839937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.840437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.840468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.841009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.841040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.841526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.841557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.842116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.842148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 
00:29:09.439 [2024-07-26 11:17:28.842686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.842716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.843178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.843193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.843695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.843726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.844235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.844267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.844834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.844865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.845412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.845443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.846004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.846035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.846544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.846575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.847009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.847040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.847605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.847636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 
00:29:09.439 [2024-07-26 11:17:28.848201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.848234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.848580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.848611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.849112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.849143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.439 [2024-07-26 11:17:28.849697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.439 [2024-07-26 11:17:28.849728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.439 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.850260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.850291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.850721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.850752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.851310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.851341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.851910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.851941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.852444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.852476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.852965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.852995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 
00:29:09.440 [2024-07-26 11:17:28.853565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.853597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.854155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.854186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.854662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.854692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.855275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.855311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.855788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.855819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.856380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.856395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.856921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.856936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.857433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.857449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.857826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.857841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.858286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.858301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 
00:29:09.440 [2024-07-26 11:17:28.858769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.858783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.859258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.859273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.859742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.859773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.860238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.860269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.860828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.860859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.861414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.861446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.862027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.862066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.862579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.862610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.862909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.862941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.863472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.863502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 
00:29:09.440 [2024-07-26 11:17:28.864006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.864036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.864585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.864616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.865119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.865150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.865621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.865651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.866208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.440 [2024-07-26 11:17:28.866240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.440 qpair failed and we were unable to recover it. 00:29:09.440 [2024-07-26 11:17:28.866749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.866779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.867326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.867357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.867842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.867872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.868410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.868442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.868997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.869027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 
00:29:09.441 [2024-07-26 11:17:28.869524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.869555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.870118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.870150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.870648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.870679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.871241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.871272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.871800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.871815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.872231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.872246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.872693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.872723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.873211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.873242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.873721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.873751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.874234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.874265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 
00:29:09.441 [2024-07-26 11:17:28.874727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.874742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.875188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.875219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.875697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.875727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.876237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.876275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.876836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.876867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.877454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.877485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.877964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.877994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.878554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.878586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.879094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.879126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.879646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.879676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 
00:29:09.441 [2024-07-26 11:17:28.880129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.880160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.880718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.880749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.881179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.881211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.881771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.881801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.882384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.882416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.882907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.882937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.883493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.883525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.884011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.884041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.884536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.884575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.885097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.885112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 
00:29:09.441 [2024-07-26 11:17:28.885658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.885688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.441 [2024-07-26 11:17:28.886189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.441 [2024-07-26 11:17:28.886220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.441 qpair failed and we were unable to recover it. 00:29:09.442 [2024-07-26 11:17:28.886774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.442 [2024-07-26 11:17:28.886805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.442 qpair failed and we were unable to recover it. 00:29:09.442 [2024-07-26 11:17:28.887316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.442 [2024-07-26 11:17:28.887347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.442 qpair failed and we were unable to recover it. 00:29:09.442 [2024-07-26 11:17:28.887841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.442 [2024-07-26 11:17:28.887873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.442 qpair failed and we were unable to recover it. 00:29:09.442 [2024-07-26 11:17:28.888436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.442 [2024-07-26 11:17:28.888467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.442 qpair failed and we were unable to recover it. 00:29:09.442 [2024-07-26 11:17:28.888948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.442 [2024-07-26 11:17:28.888979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.442 qpair failed and we were unable to recover it. 00:29:09.442 [2024-07-26 11:17:28.889412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.442 [2024-07-26 11:17:28.889443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.442 qpair failed and we were unable to recover it. 00:29:09.442 [2024-07-26 11:17:28.890000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.442 [2024-07-26 11:17:28.890031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.442 qpair failed and we were unable to recover it. 00:29:09.442 [2024-07-26 11:17:28.890574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.442 [2024-07-26 11:17:28.890605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.442 qpair failed and we were unable to recover it. 
00:29:09.442 [2024-07-26 11:17:28.891166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.442 [2024-07-26 11:17:28.891199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420
00:29:09.442 qpair failed and we were unable to recover it.
00:29:09.442 [2024-07-26 11:17:28.891751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.442 [2024-07-26 11:17:28.891765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420
00:29:09.442 qpair failed and we were unable to recover it.
[... the same three-line pattern (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for every successive connection attempt from 11:17:28.892 through 11:17:29.000; only the timestamps differ ...]
00:29:09.716 [2024-07-26 11:17:29.001021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.716 [2024-07-26 11:17:29.001036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420
00:29:09.716 qpair failed and we were unable to recover it.
00:29:09.716 [2024-07-26 11:17:29.001479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.716 [2024-07-26 11:17:29.001495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.716 qpair failed and we were unable to recover it. 00:29:09.716 [2024-07-26 11:17:29.002316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.716 [2024-07-26 11:17:29.002350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.716 qpair failed and we were unable to recover it. 00:29:09.716 [2024-07-26 11:17:29.002856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.716 [2024-07-26 11:17:29.002887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.716 qpair failed and we were unable to recover it. 00:29:09.716 [2024-07-26 11:17:29.003372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.716 [2024-07-26 11:17:29.003388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.716 qpair failed and we were unable to recover it. 00:29:09.716 [2024-07-26 11:17:29.003714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.716 [2024-07-26 11:17:29.003729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.716 qpair failed and we were unable to recover it. 00:29:09.716 [2024-07-26 11:17:29.004233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.716 [2024-07-26 11:17:29.004265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.716 qpair failed and we were unable to recover it. 00:29:09.716 [2024-07-26 11:17:29.004735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.716 [2024-07-26 11:17:29.004750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.716 qpair failed and we were unable to recover it. 00:29:09.716 [2024-07-26 11:17:29.005252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.716 [2024-07-26 11:17:29.005267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.716 qpair failed and we were unable to recover it. 00:29:09.716 [2024-07-26 11:17:29.005713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.716 [2024-07-26 11:17:29.005744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.716 qpair failed and we were unable to recover it. 00:29:09.716 [2024-07-26 11:17:29.006227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.716 [2024-07-26 11:17:29.006260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.716 qpair failed and we were unable to recover it. 
00:29:09.716 [2024-07-26 11:17:29.006797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.716 [2024-07-26 11:17:29.006812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.716 qpair failed and we were unable to recover it. 00:29:09.716 [2024-07-26 11:17:29.007207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.716 [2024-07-26 11:17:29.007221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.716 qpair failed and we were unable to recover it. 00:29:09.716 [2024-07-26 11:17:29.007906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.716 [2024-07-26 11:17:29.007927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.716 qpair failed and we were unable to recover it. 00:29:09.716 [2024-07-26 11:17:29.008432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.008449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.008899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.008914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.009683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.009700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.010170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.010185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.010644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.010659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.011125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.011140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.011552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.011582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 
00:29:09.717 [2024-07-26 11:17:29.012054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.012069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.012595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.012610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.013131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.013146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.013596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.013611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.014095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.014126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.014699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.014731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.015257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.015272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.015659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.015674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.016207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.016230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.016667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.016698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 
00:29:09.717 [2024-07-26 11:17:29.017250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.017268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.017731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.017746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.018196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.018211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.018661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.018691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.019116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.019148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.019704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.019735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.020216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.020247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.020721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.020735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.021205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.021220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.021734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.021748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 
00:29:09.717 [2024-07-26 11:17:29.022196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.022227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.022713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.022755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.023157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.023188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.023654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.023685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.024184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.024216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.024692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.024707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.025210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.025242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.025721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.025751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.026227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.026242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.717 [2024-07-26 11:17:29.026780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.026811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 
00:29:09.717 [2024-07-26 11:17:29.027237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.717 [2024-07-26 11:17:29.027269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.717 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.027744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.027787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.028239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.028254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.028701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.028732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.029227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.029259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.029817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.029847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.030325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.030340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.030665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.030681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.031206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.031237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.031768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.031799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 
00:29:09.718 [2024-07-26 11:17:29.032274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.032289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.032788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.032803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.033313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.033345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.033902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.033933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.034410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.034441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.034871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.034901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.035451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.035484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.035833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.035876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.036320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.036335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.036809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.036839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 
00:29:09.718 [2024-07-26 11:17:29.037370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.037417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.037853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.037884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.038363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.038394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.038824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.038855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.039364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.039396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.039888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.039926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.040320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.040335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.040782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.040813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.041288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.041319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.041796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.041827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 
00:29:09.718 [2024-07-26 11:17:29.042256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.042287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.042841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.042871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.043372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.043404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.043875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.043919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.044452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.044484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.044969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.044999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.045509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.045541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.046102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.718 [2024-07-26 11:17:29.046134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.718 qpair failed and we were unable to recover it. 00:29:09.718 [2024-07-26 11:17:29.046608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.046638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.047228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.047259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 
00:29:09.719 [2024-07-26 11:17:29.047792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.047823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.048352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.048383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.048864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.048894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.049365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.049396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.049881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.049911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.050462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.050478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.050913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.050943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.051450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.051481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.051976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.052007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.052486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.052517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 
00:29:09.719 [2024-07-26 11:17:29.053018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.053057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.053598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.053629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.054037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.054076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.054567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.054598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.055146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.055178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.055686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.055717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.056252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.056284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.056825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.056855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.057410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.057442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.058025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.058067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 
00:29:09.719 [2024-07-26 11:17:29.058627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.058664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.059148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.059181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.059680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.059710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.060269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.060300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.060860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.060891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.061401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.061432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.061916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.061947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.062534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.062565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.063095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.063127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.063614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.063645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 
00:29:09.719 [2024-07-26 11:17:29.064174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.064205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.064701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.064732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.065265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.065297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.065845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.065876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.066419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.719 [2024-07-26 11:17:29.066450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.719 qpair failed and we were unable to recover it. 00:29:09.719 [2024-07-26 11:17:29.066988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.067019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.067323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.067353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.067849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.067880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.068436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.068467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.068951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.068982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 
00:29:09.720 [2024-07-26 11:17:29.069526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.069541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.070030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.070067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.070554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.070585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.071139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.071170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.071748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.071779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.072333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.072364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.072901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.072931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.073470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.073502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.074060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.074091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.074348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.074379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 
00:29:09.720 [2024-07-26 11:17:29.074911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.074941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.075502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.075534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.075939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.075967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.076458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.076490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.077021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.077068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.077611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.077641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.078199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.078230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.078744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.078775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.079203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.079218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.079743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.079774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 
00:29:09.720 [2024-07-26 11:17:29.080200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.080217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.080741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.080755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.081160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.081193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.081752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.081783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.082261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.082293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.082791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.082822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.720 [2024-07-26 11:17:29.083409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.720 [2024-07-26 11:17:29.083440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.720 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.083919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.083949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.084382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.084414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.084953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.084984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 
00:29:09.721 [2024-07-26 11:17:29.085520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.085552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.085971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.086001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.086513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.086544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.087077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.087109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.087614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.087652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.088200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.088232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.088766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.088797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.089305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.089320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.089777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.089792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.090181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.090213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 
00:29:09.721 [2024-07-26 11:17:29.090771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.090803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.091372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.091403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.091902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.091933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.092418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.092450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.093030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.093069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.093545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.093575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.094058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.094099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.094630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.094661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.095096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.095125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.095691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.095722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 
00:29:09.721 [2024-07-26 11:17:29.096275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.096290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.096812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.096843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.097321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.097355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.097929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.097959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.098517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.098548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.099109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.099141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.099693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.099724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.100257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.100289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.100827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.100858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.101369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.101401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 
00:29:09.721 [2024-07-26 11:17:29.101957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.101993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.102558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.102589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.103147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.103178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.103746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.103776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.721 qpair failed and we were unable to recover it. 00:29:09.721 [2024-07-26 11:17:29.104125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.721 [2024-07-26 11:17:29.104158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.104715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.104746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.105272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.105303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.105785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.105816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.106371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.106402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.106901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.106932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 
00:29:09.722 [2024-07-26 11:17:29.107426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.107441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.107944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.107974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.108509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.108541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.109102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.109133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.109612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.109643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.110123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.110154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.110728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.110759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.111302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.111333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.111868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.111898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.112466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.112497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 
00:29:09.722 [2024-07-26 11:17:29.113077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.113108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.113620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.113651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.114207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.114238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.114722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.114752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.115300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.115331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.115890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.115921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.116349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.116380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.116890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.116920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.117402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.117434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.117935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.117966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 
00:29:09.722 [2024-07-26 11:17:29.118541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.118572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.119127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.119158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.119689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.119720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.120236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.120278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.120837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.120868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.121401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.121433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.121990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.122021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.122512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.122543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.123053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.123084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.123641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.123672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 
00:29:09.722 [2024-07-26 11:17:29.124202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.124240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.722 [2024-07-26 11:17:29.124824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.722 [2024-07-26 11:17:29.124854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.722 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.125419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.125451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.125952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.125982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.126519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.126551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.126976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.127006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.127496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.127511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.128062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.128094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.128640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.128670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.129165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.129180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 
00:29:09.723 [2024-07-26 11:17:29.129644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.129659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.130162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.130192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.130732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.130762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.131328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.131360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.131795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.131826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.132310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.132341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.132890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.132921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.133418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.133449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.133924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.133954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.134489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.134522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 
00:29:09.723 [2024-07-26 11:17:29.135059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.135090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.135653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.135684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.136213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.136245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.136780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.136811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.137288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.137320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.137853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.137884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.138439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.138470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.138900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.138931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.139505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.139536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.140011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.140052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 
00:29:09.723 [2024-07-26 11:17:29.140627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.140658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.141223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.141238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.141724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.141755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.142289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.142320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.142814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.142844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.143401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.143432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.143991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.144021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.144500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.144531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.145134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.145166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.723 qpair failed and we were unable to recover it. 00:29:09.723 [2024-07-26 11:17:29.145701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.723 [2024-07-26 11:17:29.145732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 
00:29:09.724 [2024-07-26 11:17:29.146295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.146336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.146594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.146609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.147087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.147118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.147652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.147683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.148237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.148253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.148752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.148767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.149309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.149341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.149904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.149935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.150441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.150472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.150949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.150979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 
00:29:09.724 [2024-07-26 11:17:29.151530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.151560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.152148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.152179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.152713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.152744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.153231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.153263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.153703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.153734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.154149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.154181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.154738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.154769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.155326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.155357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.155864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.155894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.156454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.156485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 
00:29:09.724 [2024-07-26 11:17:29.157065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.157097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.157580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.157610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.158106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.158137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.158695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.158726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.159202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.159233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.159765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.159795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.160269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.160310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.160848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.160879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.161412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.161444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.161916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.161946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 
00:29:09.724 [2024-07-26 11:17:29.162504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.162535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.163069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.163101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.163636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.163667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.164201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.164232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.164734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.164765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.165344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.165375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.724 qpair failed and we were unable to recover it. 00:29:09.724 [2024-07-26 11:17:29.165923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.724 [2024-07-26 11:17:29.165953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.166452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.166484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.166901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.166931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.167464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.167496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 
00:29:09.725 [2024-07-26 11:17:29.168059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.168096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.168655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.168685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.169239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.169271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.169779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.169809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.170371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.170404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.170888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.170918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.171272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.171303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.171791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.171821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.172234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.172265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.172797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.172812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 
00:29:09.725 [2024-07-26 11:17:29.173339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.173370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.173712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.173742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.174158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.174189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.174722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.174737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.175142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.175174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.175734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.175765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.176269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.176300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.176838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.176868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.177295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.177326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.177854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.177870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 
00:29:09.725 [2024-07-26 11:17:29.178314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.178346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.178838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.178869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.179369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.179401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.179887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.179917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.180209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.180240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.180707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.180738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.181291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.181322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.181880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.725 [2024-07-26 11:17:29.181911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.725 qpair failed and we were unable to recover it. 00:29:09.725 [2024-07-26 11:17:29.182338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.182370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.182847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.182877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 
00:29:09.726 [2024-07-26 11:17:29.183379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.183410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.183999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.184029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.184597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.184628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.184977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.185008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.185441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.185473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.185940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.185970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.186480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.186512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.187100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.187131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.187672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.187713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.188216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.188247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 
00:29:09.726 [2024-07-26 11:17:29.188809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.188845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.189420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.189451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.189934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.189977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.190529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.190560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.191094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.191125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.191538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.191569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.192037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.192076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.192627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.192658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.193134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.193149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.193608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.193623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 
00:29:09.726 [2024-07-26 11:17:29.194103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.194134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.194622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.194653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.195122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.195155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.195643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.195673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.196190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.196222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.196696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.196726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.197233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.197249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.197703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.197733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:09.726 [2024-07-26 11:17:29.198211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.726 [2024-07-26 11:17:29.198242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:09.726 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.199658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.199689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 
00:29:10.002 [2024-07-26 11:17:29.200199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.200232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.200721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.200736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.201215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.201236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.201469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.201485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.201979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.201994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.202250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.202268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.202718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.202732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.203197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.203213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.203674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.203689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.204148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.204163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 
00:29:10.002 [2024-07-26 11:17:29.204667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.204682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.205183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.205198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.205590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.205605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.206052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.206068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.206465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.206496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.207474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.207499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.207973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.207988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.208515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.208531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.002 [2024-07-26 11:17:29.208972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.002 [2024-07-26 11:17:29.208987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.002 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.209428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.209443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 
00:29:10.003 [2024-07-26 11:17:29.209970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.209988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.210490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.210522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.211063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.211095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.211517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.211547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.211978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.212009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.212606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.212638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.213123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.213155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.213664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.213695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.214117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.214132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.214657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.214672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 
00:29:10.003 [2024-07-26 11:17:29.215123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.215138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.215583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.215613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.215906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.215937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.216432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.216463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.217005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.217036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.217622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.217653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.218187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.218219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.218695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.218725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.219205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.219237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.219724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.219754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 
00:29:10.003 [2024-07-26 11:17:29.220230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.220261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.220539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.220570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.221104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.221136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.221638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.221669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.222086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.222118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.222652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.222683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.223237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.223253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.223754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.223769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.224279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.224302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.224761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.224792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 
00:29:10.003 [2024-07-26 11:17:29.225321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.225337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.225844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.225859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.226302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.226318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.226886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.226917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.227390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.227421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.227978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.003 [2024-07-26 11:17:29.228008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.003 qpair failed and we were unable to recover it. 00:29:10.003 [2024-07-26 11:17:29.228574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.228606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.229152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.229168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.229598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.229613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.230119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.230151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 
00:29:10.004 [2024-07-26 11:17:29.230626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.230657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.231088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.231120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.231666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.231696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.232187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.232219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.232800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.232830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.233364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.233395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.233817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.233848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.234406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.234438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.234997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.235027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.235447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.235478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 
00:29:10.004 [2024-07-26 11:17:29.236028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.236066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.236637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.236668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.237224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.237256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.237732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.237762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.238237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.238269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.238750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.238781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.239267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.239298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.239854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.239885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.240377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.240408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.240887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.240902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 
00:29:10.004 [2024-07-26 11:17:29.241432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.241464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.242002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.242033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.242531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.242563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.243143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.243175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.243731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.243762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.244020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.244057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.244565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.244595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.244872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.244909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.245392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.245425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.245985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.246015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 
00:29:10.004 [2024-07-26 11:17:29.246564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.246580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.247034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.247053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.247497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.004 [2024-07-26 11:17:29.247512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.004 qpair failed and we were unable to recover it. 00:29:10.004 [2024-07-26 11:17:29.247963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.247978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.248364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.248396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.248881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.248912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.249448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.249480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.250030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.250048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.250447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.250461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.250909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.250939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 
00:29:10.005 [2024-07-26 11:17:29.251470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.251501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.252008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.252039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.252461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.252492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.252969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.253000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.253489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.253521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.254038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.254057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.254563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.254594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.255065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.255098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.255544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.255575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.256132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.256163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 
00:29:10.005 [2024-07-26 11:17:29.256643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.256658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.257054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.257070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.257900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.257933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.258281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.258317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.258862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.258877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.259396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.259412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.259933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.259948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.260366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.260397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.260827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.260858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.261360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.261391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 
00:29:10.005 [2024-07-26 11:17:29.261733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.261764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.262247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.262278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.262756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.262786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.263365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.263380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.263831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.263846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.264313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.264328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.264550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.264565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.264996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.265013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.265535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.265551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.005 [2024-07-26 11:17:29.266073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.266105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 
00:29:10.005 [2024-07-26 11:17:29.266600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.005 [2024-07-26 11:17:29.266615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.005 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.267062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.267078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.267580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.267611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.268109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.268140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.268628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.268659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.269091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.269122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.269378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.269409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.269911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.269941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.270495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.270526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.271087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.271102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 
00:29:10.006 [2024-07-26 11:17:29.271543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.271574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.272013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.272053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.272589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.272633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.273132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.273147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.273592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.273622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.274073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.274105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.274585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.274616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.275100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.275132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.275664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.275694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.276176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.276208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 
00:29:10.006 [2024-07-26 11:17:29.276690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.276721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.277272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.277287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.277821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.277852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.278406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.278422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.278942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.278957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.279467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.279499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.279924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.279954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.280370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.280401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.280888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.280918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.281256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.281287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 
00:29:10.006 [2024-07-26 11:17:29.281780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.281794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.282251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.282283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.282711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.282742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.283223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.283255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.283783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.283813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.284290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.284321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.284814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.284829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.006 [2024-07-26 11:17:29.285353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.006 [2024-07-26 11:17:29.285371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.006 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.285819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.285833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.286286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.286301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 
00:29:10.007 [2024-07-26 11:17:29.286754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.286784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.287259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.287290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.287759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.287801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.288242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.288274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.288749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.288764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.289215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.289230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.289671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.289686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.290164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.290195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.290672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.290701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.291285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.291317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 
00:29:10.007 [2024-07-26 11:17:29.291801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.291831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.292433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.292464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.293003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.293034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.293574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.293605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.294082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.294113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.294646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.294677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.295228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.295259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.295758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.295789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.296348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.296385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.296828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.296859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 
00:29:10.007 [2024-07-26 11:17:29.297401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.297433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.297975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.298005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.298437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.298468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.298897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.298911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.299303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.299319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.299798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.299829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.300021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.300059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.007 [2024-07-26 11:17:29.300551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.007 [2024-07-26 11:17:29.300582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.007 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.301064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.301096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.301629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.301659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 
00:29:10.008 [2024-07-26 11:17:29.302135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.302167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.302648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.302678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.303242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.303274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.303811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.303842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.304375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.304407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.304883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.304914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.305466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.305508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.306036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.306086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.306570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.306600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.307161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.307193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 
00:29:10.008 [2024-07-26 11:17:29.307769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.307800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.308369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.308401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.308818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.308848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.309346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.309377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.309869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.309900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.310348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.310379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.310638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.310668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.311144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.311177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.311735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.311765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.312323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.312354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 
00:29:10.008 [2024-07-26 11:17:29.312841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.312871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.313327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.313359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.313920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.313951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.314454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.314486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.315055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.315087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.315644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.315675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.316235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.316267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.316771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.316802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.317337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.317368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.317836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.317867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 
00:29:10.008 [2024-07-26 11:17:29.318347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.318379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.318912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.318943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.319433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.319465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.319723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.319754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.320226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.008 [2024-07-26 11:17:29.320258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.008 qpair failed and we were unable to recover it. 00:29:10.008 [2024-07-26 11:17:29.320744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.320775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.321333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.321365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.321835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.321866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.322351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.322383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.322860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.322889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 
00:29:10.009 [2024-07-26 11:17:29.323373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.323404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.323896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.323926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.324431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.324463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.324956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.324987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.325488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.325503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.325880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.325910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.326486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.326517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.327060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.327097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.327646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.327677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.327934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.327965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 
00:29:10.009 [2024-07-26 11:17:29.328494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.328526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.329016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.329062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.329544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.329575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.330062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.330093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.330664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.330695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.331233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.331265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.331746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.331776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.332339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.332371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.332906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.332937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.333420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.333452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 
00:29:10.009 [2024-07-26 11:17:29.334007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.334038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.334551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.334582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.335064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.335096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.335584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.335614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.336089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.336121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.336679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.336711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.337190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.337222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.337760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.337791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.338300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.338331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 00:29:10.009 [2024-07-26 11:17:29.338818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.009 [2024-07-26 11:17:29.338848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.009 qpair failed and we were unable to recover it. 
00:29:10.009 [2024-07-26 11:17:29.339412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.009 [2024-07-26 11:17:29.339443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420
00:29:10.009 qpair failed and we were unable to recover it.
00:29:10.009-00:29:10.015 [2024-07-26 11:17:29.339 - 11:17:29.453] the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats identically for every reconnect attempt in this interval.
00:29:10.015 [2024-07-26 11:17:29.453820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.015 [2024-07-26 11:17:29.453852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420
00:29:10.015 qpair failed and we were unable to recover it.
00:29:10.015 [2024-07-26 11:17:29.454351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-07-26 11:17:29.454383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.015 [2024-07-26 11:17:29.454931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-07-26 11:17:29.454963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.015 [2024-07-26 11:17:29.455476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-07-26 11:17:29.455509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.015 [2024-07-26 11:17:29.456131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-07-26 11:17:29.456165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.015 [2024-07-26 11:17:29.456602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-07-26 11:17:29.456632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.015 [2024-07-26 11:17:29.457137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-07-26 11:17:29.457170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.015 [2024-07-26 11:17:29.457742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-07-26 11:17:29.457773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.015 [2024-07-26 11:17:29.458398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-07-26 11:17:29.458435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.015 [2024-07-26 11:17:29.459003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-07-26 11:17:29.459035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.015 [2024-07-26 11:17:29.459612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-07-26 11:17:29.459644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 
00:29:10.015 [2024-07-26 11:17:29.460205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.015 [2024-07-26 11:17:29.460238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.015 qpair failed and we were unable to recover it. 00:29:10.015 [2024-07-26 11:17:29.460675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.460707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.461200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.461232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.461812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.461843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.462394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.462426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.463064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.463095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.463618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.463649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.464146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.464178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.464680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.464711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.465223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.465254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 
00:29:10.016 [2024-07-26 11:17:29.465820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.465851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.466443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.466477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.467028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.467073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.467660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.467691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.468249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.468282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.468808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.468846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.469377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.469393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.469892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.469922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.470474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.470506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.471019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.471035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 
00:29:10.016 [2024-07-26 11:17:29.471511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.471543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.471997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.472028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.472627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.472659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.473225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.473257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.473691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.473722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.474267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.474282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.474833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.474864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.475431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.475463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.475953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.475986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.476550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.476581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 
00:29:10.016 [2024-07-26 11:17:29.477159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.477193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.477746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.477779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.478403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.016 [2024-07-26 11:17:29.478436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.016 qpair failed and we were unable to recover it. 00:29:10.016 [2024-07-26 11:17:29.479002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-07-26 11:17:29.479034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-07-26 11:17:29.479555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-07-26 11:17:29.479585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-07-26 11:17:29.480051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-07-26 11:17:29.480083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-07-26 11:17:29.480593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-07-26 11:17:29.480625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-07-26 11:17:29.481214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-07-26 11:17:29.481253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.017 [2024-07-26 11:17:29.481877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.017 [2024-07-26 11:17:29.481892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.017 qpair failed and we were unable to recover it. 00:29:10.285 [2024-07-26 11:17:29.482709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.285 [2024-07-26 11:17:29.482787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.285 qpair failed and we were unable to recover it. 
00:29:10.285 [2024-07-26 11:17:29.483427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.285 [2024-07-26 11:17:29.483468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.285 qpair failed and we were unable to recover it. 00:29:10.285 [2024-07-26 11:17:29.483963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.285 [2024-07-26 11:17:29.483980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.285 qpair failed and we were unable to recover it. 00:29:10.285 [2024-07-26 11:17:29.484493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.285 [2024-07-26 11:17:29.484527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.285 qpair failed and we were unable to recover it. 00:29:10.285 [2024-07-26 11:17:29.485074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.285 [2024-07-26 11:17:29.485107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.285 qpair failed and we were unable to recover it. 00:29:10.285 [2024-07-26 11:17:29.485680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.285 [2024-07-26 11:17:29.485711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.285 qpair failed and we were unable to recover it. 00:29:10.285 [2024-07-26 11:17:29.486277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.285 [2024-07-26 11:17:29.486309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.285 qpair failed and we were unable to recover it. 00:29:10.285 [2024-07-26 11:17:29.486835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.285 [2024-07-26 11:17:29.486866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.285 qpair failed and we were unable to recover it. 00:29:10.285 [2024-07-26 11:17:29.487355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.285 [2024-07-26 11:17:29.487399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.285 qpair failed and we were unable to recover it. 00:29:10.285 [2024-07-26 11:17:29.487831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.285 [2024-07-26 11:17:29.487847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.285 qpair failed and we were unable to recover it. 00:29:10.285 [2024-07-26 11:17:29.488321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.285 [2024-07-26 11:17:29.488354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.285 qpair failed and we were unable to recover it. 
00:29:10.285 [2024-07-26 11:17:29.488792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.285 [2024-07-26 11:17:29.488825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.285 qpair failed and we were unable to recover it. 00:29:10.285 [2024-07-26 11:17:29.489405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.285 [2024-07-26 11:17:29.489438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.285 qpair failed and we were unable to recover it. 00:29:10.285 [2024-07-26 11:17:29.490033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.285 [2024-07-26 11:17:29.490090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.490535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.490566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.491119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.491152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.491865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.491899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.492448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.492481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.493118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.493151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.493660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.493690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.494187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.494219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 
00:29:10.286 [2024-07-26 11:17:29.494746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.494778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.495410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.495444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.496302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.496338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.496848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.496879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.497406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.497440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.498007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.498039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.498653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.498685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.499159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.499175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.499603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.499619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.500178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.500194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 
00:29:10.286 [2024-07-26 11:17:29.500691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.500723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.501225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.501241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.501771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.501803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.502567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.502602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.503193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.503225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.503722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.503753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.504297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.504329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.504923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.504955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.505527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.505543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.506053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.506070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 
00:29:10.286 [2024-07-26 11:17:29.506577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.506593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.507138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.507155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.507584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.507616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.508130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.508146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.508565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.508596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.509110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.509149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.509561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.509576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.510064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.510080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.510530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.286 [2024-07-26 11:17:29.510561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.286 qpair failed and we were unable to recover it. 00:29:10.286 [2024-07-26 11:17:29.511184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.511200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 
00:29:10.287 [2024-07-26 11:17:29.511672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.511703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.512223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.512239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.512663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.512695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.513203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.513219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.513625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.513641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.514403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.514421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.515480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.515508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.516015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.516059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.516503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.516536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.517025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.517041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 
00:29:10.287 [2024-07-26 11:17:29.517509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.517525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.518003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.518019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.518450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.518483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.519121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.519155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.519664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.519703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.520273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.520307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.520813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.520844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.521409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.521442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.522059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.522093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.523244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.523272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 
00:29:10.287 [2024-07-26 11:17:29.523726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.523761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.524301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.524319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.524809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.524841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.525358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.525374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.525890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.525922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.526418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.526451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.526935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.526966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.527541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.527558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.528120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.528137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.528921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.528937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 
00:29:10.287 [2024-07-26 11:17:29.529386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.529419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.529941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.529957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.530426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.530443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.530838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.530854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.531418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.531451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.531882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.531914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.287 qpair failed and we were unable to recover it. 00:29:10.287 [2024-07-26 11:17:29.532447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.287 [2024-07-26 11:17:29.532481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.533028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.533051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.533590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.533621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.534164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.534181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 
00:29:10.288 [2024-07-26 11:17:29.534691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.534707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.535179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.535212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.535630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.535661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.536263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.536280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.536745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.536762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.537275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.537308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.537877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.537908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.538446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.538479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.538925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.538958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.539459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.539494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 
00:29:10.288 [2024-07-26 11:17:29.539954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.539971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.540513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.540530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.541022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.541091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.541548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.541582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.542091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.542131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.542561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.542593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.543194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.543227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.543712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.543745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.544313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.544329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.544811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.544843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 
00:29:10.288 [2024-07-26 11:17:29.545342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.545374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.545912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.545943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.546449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.546482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.547004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.547036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.547481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.547514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.548073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.548107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.548646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.548678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.549130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.549163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.549620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.549654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.550150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.550193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 
00:29:10.288 [2024-07-26 11:17:29.550605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.550621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.551088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.551122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.551658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.551690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.288 [2024-07-26 11:17:29.552246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.288 [2024-07-26 11:17:29.552279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.288 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.552829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.552861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.553379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.553413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.553901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.553933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.554442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.554475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.554993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.555025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.555550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.555583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 
00:29:10.289 [2024-07-26 11:17:29.556099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.556132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.556702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.556735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.557319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.557352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.557931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.557963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.558506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.558540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.559132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.559166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.559685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.559716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.560310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.560343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.560897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.560930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.561482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.561515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 
00:29:10.289 [2024-07-26 11:17:29.562237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.562272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.562726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.562759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.563296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.563330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.563910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.563943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.564454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.564493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.565013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.565065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.565573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.565605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.566198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.566232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.566737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.566768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.567331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.567365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 
00:29:10.289 [2024-07-26 11:17:29.568091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.568127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.568650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.568681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.569281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.569314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.569820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.569852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.570401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.570435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.570884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.570916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.571518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.571551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.572155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.572188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.572644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.572677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.289 [2024-07-26 11:17:29.573495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.573530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 
00:29:10.289 [2024-07-26 11:17:29.574071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.289 [2024-07-26 11:17:29.574105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.289 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.574684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.574716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.575310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.575327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.575838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.575870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.576371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.576405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.576858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.576891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.577387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.577420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.577929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.577961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.578554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.578588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.579183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.579217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 
00:29:10.290 [2024-07-26 11:17:29.579738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.579771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.580364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.580397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.580906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.580939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.581445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.581462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.581953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.581971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.582468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.582501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.583067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.583100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.583619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.583651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.584308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.584342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.584933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.584965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 
00:29:10.290 [2024-07-26 11:17:29.585510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.585544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.586165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.586198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.586788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.586820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.587396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.587429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.587933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.587970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.588497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.588533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.589149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.589182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.290 [2024-07-26 11:17:29.589773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.290 [2024-07-26 11:17:29.589806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.290 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.590414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.590447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.590976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.591009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 
00:29:10.291 [2024-07-26 11:17:29.591574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.591607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.592225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.592259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.592776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.592808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.593311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.593359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.593786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.593819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.594418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.594451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.594961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.594993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.595542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.595576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.596163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.596198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.596819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.596851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 
00:29:10.291 [2024-07-26 11:17:29.597441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.597458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.597929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.597961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.598462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.598494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.599166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.599199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.599649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.599681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.600246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.600279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.600720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.600752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.601228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.601262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.601719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.601751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.602313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.602348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 
00:29:10.291 [2024-07-26 11:17:29.602900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.602932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.603530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.603563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.604087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.604121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.604645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.604678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.605295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.605328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.605854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.605886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.606347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.606381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.606985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.607017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.607592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.607625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.608212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.608230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 
00:29:10.291 [2024-07-26 11:17:29.608705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.608737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.609256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.609291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.609843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.609876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.610386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.610419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.610958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.291 [2024-07-26 11:17:29.610996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.291 qpair failed and we were unable to recover it. 00:29:10.291 [2024-07-26 11:17:29.611505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.611522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.612144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.612177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.612630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.612662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.613239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.613272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.613795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.613829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 
00:29:10.292 [2024-07-26 11:17:29.614402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.614436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.614938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.614969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.615472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.615490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.615965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.615999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.616565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.616599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.617238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.617272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.617887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.617919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.618441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.618474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.618929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.618963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.619543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.619577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 
00:29:10.292 [2024-07-26 11:17:29.620014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.620055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.620548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.620581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.621174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.621207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.621726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.621758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.622206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.622239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.622744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.622776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.623304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.623338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.623797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.623829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.624407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.624440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 00:29:10.292 [2024-07-26 11:17:29.624896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.292 [2024-07-26 11:17:29.624928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.292 qpair failed and we were unable to recover it. 
00:29:10.292 [2024-07-26 11:17:29.625486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.292 [2024-07-26 11:17:29.625519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420
00:29:10.292 qpair failed and we were unable to recover it.
00:29:10.292 [... the same three-message sequence — connect() failed with errno = 111, sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats back-to-back for every reconnect attempt from 11:17:29.625486 through 11:17:29.744815 (console time 00:29:10.292-00:29:10.298); the intervening repetitions are omitted here ...]
00:29:10.298 [2024-07-26 11:17:29.745343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.745375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.745905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.745937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.746469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.746502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.747074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.747108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.747630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.747652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.748119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.748136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.748622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.748653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.749176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.749193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.749679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.749711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.750337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.750370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 
00:29:10.298 [2024-07-26 11:17:29.750994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.751026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.751631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.751663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.752289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.752322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.752884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.752916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.753422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.753456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.754015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.754057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.754534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.754566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.755124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.755158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.755618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.755650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.756172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.756207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 
00:29:10.298 [2024-07-26 11:17:29.756713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.756745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.298 [2024-07-26 11:17:29.757370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.298 [2024-07-26 11:17:29.757403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.298 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.758922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.758956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.759536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.759555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.760152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.760170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.760945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.760972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.761483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.761503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.761997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.762014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.762628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.762657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.763281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.763299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 
00:29:10.299 [2024-07-26 11:17:29.763797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.763815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.764735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.764763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.765212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.765231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.765751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.765768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.766374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.766401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.766837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.766855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.767325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.767343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.768077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.768103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.768530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.768548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.769125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.769143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 
00:29:10.299 [2024-07-26 11:17:29.769633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.769651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.770122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.770139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.770631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.770648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.771126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.771144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.771606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.771627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.772057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.772075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.299 [2024-07-26 11:17:29.772411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.299 [2024-07-26 11:17:29.772427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.299 qpair failed and we were unable to recover it. 00:29:10.567 [2024-07-26 11:17:29.773014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.567 [2024-07-26 11:17:29.773035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.567 qpair failed and we were unable to recover it. 00:29:10.567 [2024-07-26 11:17:29.773539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.567 [2024-07-26 11:17:29.773556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.567 qpair failed and we were unable to recover it. 00:29:10.567 [2024-07-26 11:17:29.773958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.567 [2024-07-26 11:17:29.773975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.567 qpair failed and we were unable to recover it. 
00:29:10.567 [2024-07-26 11:17:29.774494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.567 [2024-07-26 11:17:29.774512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.567 qpair failed and we were unable to recover it. 00:29:10.567 [2024-07-26 11:17:29.774996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.567 [2024-07-26 11:17:29.775013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.567 qpair failed and we were unable to recover it. 00:29:10.567 [2024-07-26 11:17:29.775576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.567 [2024-07-26 11:17:29.775593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.567 qpair failed and we were unable to recover it. 00:29:10.567 [2024-07-26 11:17:29.776111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.567 [2024-07-26 11:17:29.776129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.567 qpair failed and we were unable to recover it. 00:29:10.567 [2024-07-26 11:17:29.776591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.567 [2024-07-26 11:17:29.776607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.567 qpair failed and we were unable to recover it. 00:29:10.567 [2024-07-26 11:17:29.777181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.567 [2024-07-26 11:17:29.777198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.567 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.777777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.777793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.778203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.778221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.778692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.778709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.779280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.779298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 
00:29:10.568 [2024-07-26 11:17:29.779774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.779791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.780252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.780268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.780800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.780816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.781379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.781396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.781980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.781996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.782560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.782577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.783060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.783078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.783596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.783612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.784166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.784183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.784631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.784648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 
00:29:10.568 [2024-07-26 11:17:29.785193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.785209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.785715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.785732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.786225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.786242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.786766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.786783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.787330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.787348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.787836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.787853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.788403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.788422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.788987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.789004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.789589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.789605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.790122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.790139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 
00:29:10.568 [2024-07-26 11:17:29.790542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.790557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.791020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.791037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.791594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.791611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.792088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.792105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.792606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.792623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.793186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.793203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.793720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.793736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.794205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.794222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.794695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.794711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.795283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.795300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 
00:29:10.568 [2024-07-26 11:17:29.795867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.568 [2024-07-26 11:17:29.795884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.568 qpair failed and we were unable to recover it. 00:29:10.568 [2024-07-26 11:17:29.796472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.796489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.796972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.796989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.797476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.797493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.798065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.798082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.798592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.798608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.799175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.799192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.799693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.799709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.800287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.800304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.800791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.800807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 
00:29:10.569 [2024-07-26 11:17:29.801266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.801282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.801843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.801860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.802420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.802437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.802947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.802962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.803527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.803543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.804022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.804038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.804504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.804521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.804987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.805002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.805529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.805546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.806070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.806087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 
00:29:10.569 [2024-07-26 11:17:29.806604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.806619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.807154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.807179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.807771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.807787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.808348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.808365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.808927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.808944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.809497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.809514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.810085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.810102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.810660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.810676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.811223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.811240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.811821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.811838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 
00:29:10.569 [2024-07-26 11:17:29.812392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.812409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.812977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.812993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.813555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.813572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.814057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.814074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.814542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.814558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.815031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.815057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.815574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.815591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.816069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.569 [2024-07-26 11:17:29.816087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.569 qpair failed and we were unable to recover it. 00:29:10.569 [2024-07-26 11:17:29.816553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.816569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.817069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.817085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 
00:29:10.570 [2024-07-26 11:17:29.817614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.817631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.818152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.818169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.818730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.818746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.819243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.819260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.819800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.819817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.820366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.820384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.820932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.820949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.821415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.821432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.821959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.821975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.822436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.822453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 
00:29:10.570 [2024-07-26 11:17:29.823034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.823056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.823464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.823481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.823928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.823944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.824429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.824447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.824966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.824982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.825519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.825536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.826078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.826095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.826699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.826715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.827258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.827275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.827812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.827829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 
00:29:10.570 [2024-07-26 11:17:29.828412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.828429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.828976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.828995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.829538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.829555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.830145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.830162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.830718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.830735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.831278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.831295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.831818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.831834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.832365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.832382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.832859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.832876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.833341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.833358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 
00:29:10.570 [2024-07-26 11:17:29.833882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.833899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.834372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.834389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.834928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.570 [2024-07-26 11:17:29.834944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.570 qpair failed and we were unable to recover it. 00:29:10.570 [2024-07-26 11:17:29.835511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.835528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.836089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.836106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.836651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.836667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.837236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.837253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.837751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.837767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.838336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.838353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.838844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.838860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 
00:29:10.571 [2024-07-26 11:17:29.839436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.839454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.840001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.840018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.840489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.840505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.840965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.840981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.841521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.841538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.842096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.842113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.842609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.842625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.843170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.843186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.843742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.843758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.844239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.844256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 
00:29:10.571 [2024-07-26 11:17:29.844819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.844836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.845439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.845455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.846039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.846060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.846625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.846641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.847240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.847257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.847720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.847736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.848263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.848280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.848737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.848753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.849302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.849319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 00:29:10.571 [2024-07-26 11:17:29.849804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.849820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.571 qpair failed and we were unable to recover it. 
00:29:10.571 [2024-07-26 11:17:29.850284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.571 [2024-07-26 11:17:29.850301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.850845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.850864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.851453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.851470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.851906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.851923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.852464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.852481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.853057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.853074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.853562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.853578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.854061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.854078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.854592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.854609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.855178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.855196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 
00:29:10.572 [2024-07-26 11:17:29.855661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.855677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.856142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.856159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.856699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.856716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.857319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.857335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.857919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.857936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.858481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.858498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.858965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.858982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.859443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.859460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.859976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.859993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.860517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.860534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 
00:29:10.572 [2024-07-26 11:17:29.861013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.861030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.861509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.861527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.862051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.862068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.862548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.862565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.863103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.863120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.572 [2024-07-26 11:17:29.863688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.572 [2024-07-26 11:17:29.863704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.572 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.864299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.864316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.864771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.864788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.865350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.865368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.865850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.865866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 
00:29:10.573 [2024-07-26 11:17:29.866403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.866419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.866898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.866913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.867454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.867471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.867954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.867969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.868549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.868566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.869136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.869153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.869709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.869725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.870284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.870302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.870883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.870899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.871448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.871465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 
00:29:10.573 [2024-07-26 11:17:29.871967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.871983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.872525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.872545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.873085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.873102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.873580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.873596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.874139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.874156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.874714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.874730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.875270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.875286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.875834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.875865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.876374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.876407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.876923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.876960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 
00:29:10.573 [2024-07-26 11:17:29.877538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.877571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.878190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.878206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.878772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.878804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.879417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.879450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.880024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.880066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.880661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.880693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.881285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.881302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.881850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.881882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.882478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.882510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.883104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.883137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 
00:29:10.573 [2024-07-26 11:17:29.883693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.883726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.884343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.573 [2024-07-26 11:17:29.884375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.573 qpair failed and we were unable to recover it. 00:29:10.573 [2024-07-26 11:17:29.884936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.884967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.885537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.885569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.886078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.886111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.886613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.886644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.887216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.887233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.887774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.887806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.888408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.888443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.889007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.889039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 
00:29:10.574 [2024-07-26 11:17:29.889630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.889662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.890261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.890294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.890895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.890931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.891420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.891453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.892031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.892083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.892709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.892741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.893267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.893300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.893837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.893868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.894409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.894442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.895001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.895032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 
00:29:10.574 [2024-07-26 11:17:29.895551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.895583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.896165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.896206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.896721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.896752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.897313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.897346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.897876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.897908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.898482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.898515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.899088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.899121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.899698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.899729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.900286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.900319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.900889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.900921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 
00:29:10.574 [2024-07-26 11:17:29.901501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.901534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.902061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.902093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.902653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.902685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.903251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.903268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.903809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.903826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.574 qpair failed and we were unable to recover it. 00:29:10.574 [2024-07-26 11:17:29.904389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.574 [2024-07-26 11:17:29.904423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.904992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.905023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.905639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.905673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.906242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.906275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.906892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.906923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 
00:29:10.575 [2024-07-26 11:17:29.907523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.907556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.908075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.908108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.908627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.908659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.909160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.909193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.909725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.909757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.910319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.910352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.910955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.910986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.911545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.911576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.912153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.912187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.912713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.912744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 
00:29:10.575 [2024-07-26 11:17:29.913320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.913353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.913871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.913902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.914483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.914516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.915104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.915137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.915727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.915758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.916357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.916393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.916965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.916997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.917573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.917606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.918115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.918148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.918722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.918754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 
00:29:10.575 [2024-07-26 11:17:29.919300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.919333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.919908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.919950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.920395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.920426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.920880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.920912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.921481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.921515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.922087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.922120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.922695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.922727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.923354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.923386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.924005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.924037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.924651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.924683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 
00:29:10.575 [2024-07-26 11:17:29.925270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.925304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.925903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.925935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.575 [2024-07-26 11:17:29.926510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.575 [2024-07-26 11:17:29.926542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.575 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.927141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.927174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.927773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.927804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.928328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.928361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.928921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.928953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.929397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.929430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.929997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.930030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.930613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.930646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 
00:29:10.576 [2024-07-26 11:17:29.931207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.931240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.931818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.931850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.932302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.932334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.932912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.932944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.933549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.933582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.934173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.934207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.934723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.934754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.935349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.935382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.935971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.936003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.936623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.936656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 
00:29:10.576 [2024-07-26 11:17:29.937217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.937250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.937869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.937900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.938484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.938517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.939108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.939140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.939759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.939790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.940370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.940403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.941019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.941064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.941636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.941668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.942269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.942302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.942893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.942925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 
00:29:10.576 [2024-07-26 11:17:29.943495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.943527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.944026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.944073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.944593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.944625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.945191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.945208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.945763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.945795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.946356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.946389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.946906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.946938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.947504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.947537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.948134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.948166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.576 qpair failed and we were unable to recover it. 00:29:10.576 [2024-07-26 11:17:29.948767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.576 [2024-07-26 11:17:29.948798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 
00:29:10.577 [2024-07-26 11:17:29.949311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.949344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.949883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.949914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.950442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.950475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.951071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.951105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.951628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.951659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.952114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.952162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.952767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.952799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.953418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.953451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.954022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.954065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.954600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.954631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 
00:29:10.577 [2024-07-26 11:17:29.955202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.955235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.955824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.955856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.956384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.956416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.956919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.956951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.957473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.957505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.958073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.958106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.958683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.958715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.959335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.959367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.959880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.959912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.960415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.960447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 
00:29:10.577 [2024-07-26 11:17:29.960968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.961000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.961579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.961612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.962182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.962214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.962733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.962765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.963329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.963362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.963947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.963978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.964527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.964559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.965129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.965162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.965738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.965770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 00:29:10.577 [2024-07-26 11:17:29.966394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.577 [2024-07-26 11:17:29.966426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.577 qpair failed and we were unable to recover it. 
00:29:10.578 [2024-07-26 11:17:29.967031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.967074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.967632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.967668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.968241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.968274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.968875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.968907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.969484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.969516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.970112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.970156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.970750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.970782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.971381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.971413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.971952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.971984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.972513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.972545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 
00:29:10.578 [2024-07-26 11:17:29.973127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.973161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.973675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.973707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.974266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.974298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.974898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.974930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.975446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.975478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.976040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.976082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.976677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.976709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.977307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.977340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.977938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.977970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.978487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.978519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 
00:29:10.578 [2024-07-26 11:17:29.979091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.979124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.979720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.979753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.980363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.980395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.980956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.980988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.981608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.981640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.982228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.982261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.982842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.982874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.983471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.983504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.984088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.984121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.984705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.984736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 
00:29:10.578 [2024-07-26 11:17:29.985286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.985319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.985945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.985976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.986571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.986604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.987134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.987166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.987726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.987758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.988373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.988418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.989039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.578 [2024-07-26 11:17:29.989088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.578 qpair failed and we were unable to recover it. 00:29:10.578 [2024-07-26 11:17:29.989708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.989740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:29.990318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.990352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:29.990926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.990958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 
00:29:10.579 [2024-07-26 11:17:29.991681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.991766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:29.992420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.992472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:29.993113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.993148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:29.993743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.993775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:29.994386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.994420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:29.994969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.995001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:29.995713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.995750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:29.996353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.996385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:29.996945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.996978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:29.997595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.997628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 
00:29:10.579 [2024-07-26 11:17:29.998554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.998590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:29.999181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.999197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:29.999729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:29.999745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.000276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.000293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.000795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.000827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.001412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.001445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.002035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.002062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.002533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.002550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.003354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.003371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.003912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.003929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 
00:29:10.579 [2024-07-26 11:17:30.004402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.004419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.004882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.004899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.005392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.005409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.005969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.005986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.006547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.006564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.007053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.007070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.007549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.007565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.008121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.008138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.008652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.008669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.009152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.009169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 
00:29:10.579 [2024-07-26 11:17:30.009651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.009668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.579 [2024-07-26 11:17:30.010184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.579 [2024-07-26 11:17:30.010201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.579 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.010675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.010692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.011119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.011135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.011604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.011620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.012241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.012259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.012856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.012882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.013364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.013391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.014023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.014063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.014699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.014776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3438000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 
00:29:10.580 [2024-07-26 11:17:30.015925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.016221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.016921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.017017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.017713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.017746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.018308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.018327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.018824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.018840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.019326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.019345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.019813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.019830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.020355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.020372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.021203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.021221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.021779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.021796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 
00:29:10.580 [2024-07-26 11:17:30.022263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.022280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.022784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.022800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.023228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.023245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.023785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.023802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.024345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.024363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.024795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.024812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.025333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.025350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.025916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.025933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.026461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.580 [2024-07-26 11:17:30.026478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.580 qpair failed and we were unable to recover it. 00:29:10.580 [2024-07-26 11:17:30.027018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.027036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 
00:29:10.581 [2024-07-26 11:17:30.027433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.027449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.027935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.027953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.028499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.028516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.028999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.029016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.029591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.029609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.030196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.030213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.030800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.030817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.031228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.031245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.031782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.031799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.032338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.032355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 
00:29:10.581 [2024-07-26 11:17:30.032829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.032846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.033315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.033332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.033848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.033865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.034476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.034493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.035053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.035070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.035477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.035493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.035938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.035954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.036492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.036508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.036987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.037004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.037543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.037560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 
00:29:10.581 [2024-07-26 11:17:30.038025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.038041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.038589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.038612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.039028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.039050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.039458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.039474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.039955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.039972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.040507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.040524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.041049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.041066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.041374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.041391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.041803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.041820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.042305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.042323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 
00:29:10.581 [2024-07-26 11:17:30.042867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.042883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.043352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.043369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.043901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.043917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.044208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.044225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.581 qpair failed and we were unable to recover it. 00:29:10.581 [2024-07-26 11:17:30.044743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.581 [2024-07-26 11:17:30.044758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.045218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.045234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.045702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.045719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.046174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.046192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.046675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.046690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.047199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.047216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 
00:29:10.582 [2024-07-26 11:17:30.047617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.047634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.048098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.048115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.048626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.048643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.049115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.049132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.049613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.049630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.050142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.050159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.050620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.050636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.051149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.051166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.051703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.051720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.052248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.052265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 
00:29:10.582 [2024-07-26 11:17:30.052746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.052762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.053243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.053261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.053720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.053736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.054189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.054205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.054706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.054722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.582 [2024-07-26 11:17:30.055230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.582 [2024-07-26 11:17:30.055246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.582 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.055764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.055783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.056364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.056382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.056840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.056857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.057370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.057387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 
00:29:10.851 [2024-07-26 11:17:30.057930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.057946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.058420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.058440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.058899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.058920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.059431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.059448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.059986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.060002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.060453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.060470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.060942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.060961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.061417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.061434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.061831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.061847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.062249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.062264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 
00:29:10.851 [2024-07-26 11:17:30.062775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.062791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.063242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.063259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.063739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.851 [2024-07-26 11:17:30.063756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.851 qpair failed and we were unable to recover it. 00:29:10.851 [2024-07-26 11:17:30.064236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.064253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.064719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.064735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.065249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.065266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.065773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.065789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.066246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.066264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.066801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.066818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.067273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.067290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 
00:29:10.852 [2024-07-26 11:17:30.067684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.067700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.068231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.068249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.068714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.068730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.069240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.069257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.069743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.069759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.070157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.070173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.070629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.070644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.071173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.071190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.071726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.071742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.072223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.072239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 
00:29:10.852 [2024-07-26 11:17:30.072795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.072811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.073275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.073292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.073827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.073848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.074394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.074449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.075073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.075099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.075802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.075894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.076635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.076657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.077022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.077038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.077531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.077547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.078063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.078081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 
00:29:10.852 [2024-07-26 11:17:30.078627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.078644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.079169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.079186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.079766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.079782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.080321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.080337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.080763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.080778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.081321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.081337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.081835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.081850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.082359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.082376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.082900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.082915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.852 [2024-07-26 11:17:30.083459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.083476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 
00:29:10.852 [2024-07-26 11:17:30.083969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.852 [2024-07-26 11:17:30.083985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.852 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.084542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.084557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.085055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.085070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.085335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.085352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.085834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.085850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.086355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.086372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.086900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.086915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.087439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.087456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.087917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.087932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.088470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.088486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 
00:29:10.853 [2024-07-26 11:17:30.089069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.089085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.089635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.089651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.090156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.090172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.090705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.090721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.091210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.091226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.091690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.091706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.092156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.092172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.092698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.092713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.093297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.093316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.093801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.093817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 
00:29:10.853 [2024-07-26 11:17:30.094323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.094339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.094808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.094823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.095279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.095296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.095775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.095791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.096349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.096365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.096811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.096827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.097338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.097354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.097904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.097919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.098373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.098390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.098798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.098815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 
00:29:10.853 [2024-07-26 11:17:30.099310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.099328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.099856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.099872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.100398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.100414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.100936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.100952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.101396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.101428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.101946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.101978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.102456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.102489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.103026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.853 [2024-07-26 11:17:30.103098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.853 qpair failed and we were unable to recover it. 00:29:10.853 [2024-07-26 11:17:30.103637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.854 [2024-07-26 11:17:30.103669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.854 qpair failed and we were unable to recover it. 00:29:10.854 [2024-07-26 11:17:30.104225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.854 [2024-07-26 11:17:30.104258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.854 qpair failed and we were unable to recover it. 
00:29:10.854 [2024-07-26 11:17:30.104829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.854 [2024-07-26 11:17:30.104861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.854 qpair failed and we were unable to recover it. 00:29:10.854 [2024-07-26 11:17:30.105376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.854 [2024-07-26 11:17:30.105408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.854 qpair failed and we were unable to recover it. 00:29:10.854 [2024-07-26 11:17:30.105901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.854 [2024-07-26 11:17:30.105935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.854 qpair failed and we were unable to recover it. 00:29:10.854 [2024-07-26 11:17:30.106546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.854 [2024-07-26 11:17:30.106579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.854 qpair failed and we were unable to recover it. 00:29:10.854 [2024-07-26 11:17:30.107197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.854 [2024-07-26 11:17:30.107240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.854 qpair failed and we were unable to recover it. 00:29:10.854 [2024-07-26 11:17:30.107789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.854 [2024-07-26 11:17:30.107821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.854 qpair failed and we were unable to recover it. 00:29:10.854 [2024-07-26 11:17:30.108393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.854 [2024-07-26 11:17:30.108426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.854 qpair failed and we were unable to recover it. 00:29:10.854 [2024-07-26 11:17:30.109039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.854 [2024-07-26 11:17:30.109080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.854 qpair failed and we were unable to recover it. 00:29:10.854 [2024-07-26 11:17:30.109683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.854 [2024-07-26 11:17:30.109714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.854 qpair failed and we were unable to recover it. 00:29:10.854 [2024-07-26 11:17:30.110308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.854 [2024-07-26 11:17:30.110340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.854 qpair failed and we were unable to recover it. 
00:29:10.860 [2024-07-26 11:17:30.228928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.228944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.229460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.229477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.230053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.230086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.230632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.230664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.231183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.231219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.231829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.231861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.232444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.232484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.233021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.233037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.233761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.233856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.234448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.234490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 
00:29:10.860 [2024-07-26 11:17:30.235070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.235115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.235640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.235673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.236272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.236307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.236900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.236933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.237508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.237541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.238117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.238150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.238751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.238783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.239308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.239344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.239846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.239878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.240481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.240498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 
00:29:10.860 [2024-07-26 11:17:30.241009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.241054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.241656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.241688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.242357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.242391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.242970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.243001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.243518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.243555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.244150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.244183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.244761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.244794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.245376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.245409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.246006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.246038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.246954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.246989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 
00:29:10.860 [2024-07-26 11:17:30.247578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.247614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.248215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.248232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.248767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.248783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.860 qpair failed and we were unable to recover it. 00:29:10.860 [2024-07-26 11:17:30.249342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.860 [2024-07-26 11:17:30.249359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.249882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.249914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.250495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.250530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.251052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.251069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.251633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.251650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.252188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.252205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.252709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.252726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 
00:29:10.861 [2024-07-26 11:17:30.253331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.253350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.253785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.253802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.254308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.254342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.254916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.254947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.255448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.255481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.256039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.256084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.256630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.256663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.257227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.257263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.257863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.257902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.258506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.258523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 
00:29:10.861 [2024-07-26 11:17:30.258989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.259025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.259910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.259945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.260560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.260594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.261094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.261112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.261631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.261648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.262223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.262239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.262783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.262799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.263285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.263318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.263885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.263901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.264312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.264361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 
00:29:10.861 [2024-07-26 11:17:30.264883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.264914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.265510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.265528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.266002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.266019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.266547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.266580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.267195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.267213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.267801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.267833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.268393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.268410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.268948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.861 [2024-07-26 11:17:30.268981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.861 qpair failed and we were unable to recover it. 00:29:10.861 [2024-07-26 11:17:30.269559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.269595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.270157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.270191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 
00:29:10.862 [2024-07-26 11:17:30.270634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.270666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.271215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.271249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.271760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.271792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.272290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.272323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.272889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.272905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.273380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.273398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.273960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.273976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.274473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.274490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.274984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.275000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.275461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.275478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 
00:29:10.862 [2024-07-26 11:17:30.275959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.275990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.276571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.276604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.277106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.277124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.277627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.277659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.278254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.278271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.278741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.278759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.279122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.279139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.279632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.279666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.280283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.280303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.280857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.280889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 
00:29:10.862 [2024-07-26 11:17:30.281492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.281527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.282115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.282131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.282622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.282638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.283140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.283157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.283588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.283603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.284086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.284120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.284707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.284740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.285263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.285298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.285851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.285893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.286381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.286415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 
00:29:10.862 [2024-07-26 11:17:30.286912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.286944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.287414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.287448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.288061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.288095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.288751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.288783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.289304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.289338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.862 [2024-07-26 11:17:30.289912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.862 [2024-07-26 11:17:30.289944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.862 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.290542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.290576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.291087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.291120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.291689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.291705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.292287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.292320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 
00:29:10.863 [2024-07-26 11:17:30.292899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.292931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.293499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.293535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.294071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.294105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.294678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.294710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.295286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.295319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.295917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.295950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.296518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.296535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.297012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.297056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.297694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.297727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.298330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.298363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 
00:29:10.863 [2024-07-26 11:17:30.298896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.298927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.299530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.299562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.300073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.300106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.300622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.300654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.301110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.301146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.301684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.301700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.302250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.302283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.302815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.302847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.303421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.303460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 00:29:10.863 [2024-07-26 11:17:30.303984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.863 [2024-07-26 11:17:30.304016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:10.863 qpair failed and we were unable to recover it. 
00:29:10.863 [2024-07-26 11:17:30.304607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.863 [2024-07-26 11:17:30.304639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420
00:29:10.863 qpair failed and we were unable to recover it.
00:29:11.138 [the same three-line failure sequence — connect() failed, errno = 111; sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats continuously from 2024-07-26 11:17:30.304607 through 11:17:30.428837]
00:29:11.138 [2024-07-26 11:17:30.429346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.429382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.429960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.429992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.430579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.430612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.431300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.431333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.431848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.431880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.432484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.432517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.433107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.433143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.433607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.433624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.433990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.434021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.434556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.434589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 
00:29:11.138 [2024-07-26 11:17:30.435159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.435192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.435787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.435819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.436414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.436447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.437041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.437096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.437603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.437620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.438203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.438243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.438671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.438703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.439277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.439310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.439894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.439926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.440722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.440757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 
00:29:11.138 [2024-07-26 11:17:30.441378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.441413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.442038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.442081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.442643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.442676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.443247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.443291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.443765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.443800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.444230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.444247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.444723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.444754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.138 [2024-07-26 11:17:30.445316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.138 [2024-07-26 11:17:30.445352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.138 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.445922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.445954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.446574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.446609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 
00:29:11.139 [2024-07-26 11:17:30.447183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.447216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.447717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.447749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.448328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.448361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.448900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.448931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.449489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.449525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.450094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.450127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.450702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.450734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.451321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.451360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.451917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.451949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.452463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.452496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 
00:29:11.139 [2024-07-26 11:17:30.452992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.453024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.453624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.453659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.454191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.454225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.455073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.455107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.455732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.455749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.456316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.456349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.456926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.456959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.457532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.457568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.458437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.458472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.459054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.459087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 
00:29:11.139 [2024-07-26 11:17:30.459693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.459724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.460324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.460358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.460757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.460788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.461242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.461277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.461723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.461755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.462251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.462290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.462833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.462864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.463497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.463529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.464105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.464138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.464584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.464601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 
00:29:11.139 [2024-07-26 11:17:30.465155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.465190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.465769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.139 [2024-07-26 11:17:30.465800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.139 qpair failed and we were unable to recover it. 00:29:11.139 [2024-07-26 11:17:30.466414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.466447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.467037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.467081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.467678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.467710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.468313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.468346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.469193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.469212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.469779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.469813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.470343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.470375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.470947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.470979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 
00:29:11.140 [2024-07-26 11:17:30.471822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.471857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.472468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.472519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.473124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.473160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.473760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.473791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.474361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.474378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.474941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.474973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.475548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.475588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.476081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.476098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.476638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.476671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.477269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.477304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 
00:29:11.140 [2024-07-26 11:17:30.477827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.477860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.478344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.478377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.478986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.479019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.479617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.479650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.480156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.480190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.480789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.480805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.481285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.481303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.481843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.481860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.482319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.482352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.482946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.482979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 
00:29:11.140 [2024-07-26 11:17:30.483539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.483572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.484113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.484156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.484734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.484765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.485346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.485382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.485958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.485991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.486511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.486549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.487083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.487115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.487690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.487722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.488179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.488212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 00:29:11.140 [2024-07-26 11:17:30.488727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.140 [2024-07-26 11:17:30.488759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.140 qpair failed and we were unable to recover it. 
00:29:11.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1612051 Killed "${NVMF_APP[@]}" "$@" 00:29:11.141 [2024-07-26 11:17:30.489264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.489302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 11:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:11.141 [2024-07-26 11:17:30.489814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.489833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 11:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:11.141 [2024-07-26 11:17:30.490325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.490344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 11:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:11.141 11:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:11.141 [2024-07-26 11:17:30.490816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.490850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 11:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.141 [2024-07-26 11:17:30.491415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.491450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.492024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.492049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.492559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.492591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 
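(Editor's note, illustrative only.) The interleaved xtrace lines show what the test is doing while the host keeps retrying: target_disconnect.sh has killed the running target ("Killed "${NVMF_APP[@]}"" at its line 36), and nvmf_target_disconnect_tc2 now calls disconnect_init 10.0.0.2, which restarts the target through nvmfappstart -m 0xF0. A simplified sketch of that restart step, built only from the invocation visible further down in this log; it is not the actual common.sh implementation, and the netns name, binary path, and flags are copied verbatim from the trace:

#!/usr/bin/env bash
# Sketch of the target restart traced above. The surrounding bookkeeping
# (timing_enter, xtrace toggling, PID bookkeeping in common.sh) is omitted.
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

# Launch a fresh nvmf target inside the test's network namespace, with the
# same flags the log shows: -i 0 (shm id), -e 0xFFFF (tracepoint mask as
# passed in the trace), -m 0xF0 (core mask, cores 4-7).
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!   # pid of the ip-netns wrapper here; the real helper resolves the nvmf_tgt pid itself
echo "started nvmf_tgt, wrapper pid ${nvmfpid}"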
00:29:11.141 [2024-07-26 11:17:30.493106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.493144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.493660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.493693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.494290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.494323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.494877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.494909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.495458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.495478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.495760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.495777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.496245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.496278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.496722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.496754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.497195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.497231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 
00:29:11.141 11:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1612784 00:29:11.141 [2024-07-26 11:17:30.497751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.497771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 11:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1612784 00:29:11.141 11:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:11.141 [2024-07-26 11:17:30.498287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.498307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 11:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1612784 ']' 00:29:11.141 11:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.141 11:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:11.141 11:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.141 11:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:11.141 11:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:11.141 [2024-07-26 11:17:30.500548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.500584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.501194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.501227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.501725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.501760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 
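(Editor's note, illustrative only.) waitforlisten 1612784 then blocks until the restarted target (nvmfpid=1612784 above) is alive and its RPC server socket exists at /var/tmp/spdk.sock. Roughly, and not the real autotest_common.sh helper, that amounts to a poll like the one below; the pid and socket path are the ones printed in the log, and the retry count mirrors the "local max_retries=100" seen in the trace:

#!/usr/bin/env bash
# Approximate stand-in for waitforlisten: poll until the target process is up
# and its RPC UNIX-domain socket has been created, or give up after
# max_retries attempts.
pid=1612784                      # pid reported by nvmfappstart in the log
rpc_addr=/var/tmp/spdk.sock      # RPC socket path echoed by the helper above
max_retries=100

for ((i = 0; i < max_retries; i++)); do
    # Fail fast if the target died instead of coming up.
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt (pid $pid) exited" >&2; exit 1; }
    # Ready once the RPC UNIX-domain socket exists.
    if [[ -S "$rpc_addr" ]]; then
        echo "nvmf_tgt is listening on $rpc_addr"
        exit 0
    fi
    sleep 0.1
done
echo "timed out waiting for $rpc_addr" >&2
exit 1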
00:29:11.141 [2024-07-26 11:17:30.502337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.502372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.502884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.502916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.503477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.503511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.504013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.504056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.504557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.504588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.505198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.505234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.505677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.505709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.506068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.141 [2024-07-26 11:17:30.506101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.141 qpair failed and we were unable to recover it. 00:29:11.141 [2024-07-26 11:17:30.506670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.506704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.507215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.507249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 
00:29:11.142 [2024-07-26 11:17:30.507732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.507761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.508269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.508302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.508837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.508870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.509364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.509396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.509993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.510024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.510571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.510603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.511060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.511092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.511598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.511631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.512200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.512230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.512582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.512608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 
00:29:11.142 [2024-07-26 11:17:30.513095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.513130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.513733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.513759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.514244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.514286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.514824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.514855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.515358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.515383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.515767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.515790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.516354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.516381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.517159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.517187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.517395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.517414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.517878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.517897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 
00:29:11.142 [2024-07-26 11:17:30.518177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.518196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.518641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.518661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.519145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.519172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.519636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.519658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.520119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.520144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.142 [2024-07-26 11:17:30.520620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.142 [2024-07-26 11:17:30.520640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.142 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.521064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.521087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.521608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.521635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.522109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.522139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.522477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.522499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 
00:29:11.143 [2024-07-26 11:17:30.522971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.522999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.523471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.523501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.524000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.524032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.528063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.528097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.528647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.528671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.529229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.529252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.529667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.529689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.529883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.529899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.530469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.530496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.531020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.531057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 
00:29:11.143 [2024-07-26 11:17:30.531553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.531577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.531814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.531830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.532284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.532309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.532853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.532878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.533392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.533416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.533865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.533886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.534411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.534433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.534908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.534925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.535662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.535687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.536229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.536249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 
00:29:11.143 [2024-07-26 11:17:30.536655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.536674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.537183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.537200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.537715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.537731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.538204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.538219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.538677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.538690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.539131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.539151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.539649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.539667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.539900] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:11.143 [2024-07-26 11:17:30.539940] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.143 [2024-07-26 11:17:30.540135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.540151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.540626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.540640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 
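Note on the repeated failures above: on Linux, errno 111 is ECONNREFUSED, i.e. nothing was accepting connections on 10.0.0.2:4420 at the time, so each attempt by the initiator was refused and retried. The sketch below is illustrative only (it is not SPDK code and none of its names come from this log); it merely reproduces the condition that posix_sock_create reports, assuming a Linux host and reusing the address/port from the log for clarity.

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Plain blocking TCP client socket. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port, as in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener bound to the port, this prints
         * "connect() failed, errno = 111 (Connection refused)". */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

In the log this pattern repeats until the target starts listening (or the test gives up), which is why the same errno 111 record appears many times with only the timestamps changing.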
00:29:11.143 [2024-07-26 11:17:30.541190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.143 [2024-07-26 11:17:30.541206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.143 qpair failed and we were unable to recover it. 00:29:11.143 [2024-07-26 11:17:30.541596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.541609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.541843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.541856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.542252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.542268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.542648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.542659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.543126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.543139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.543584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.543594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.544027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.544038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.544492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.544505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.544903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.544921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 
00:29:11.144 [2024-07-26 11:17:30.545365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.545379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.545766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.545778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.546283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.546305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.546762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.546803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.547273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.547319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.547891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.547907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.548457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.548474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.548995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.549010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.549480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.549496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.549890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.549905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 
00:29:11.144 [2024-07-26 11:17:30.550191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.550207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.550679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.550695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.551201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.551217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.551736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.551752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.552211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.552227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.552749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.552764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.553284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.553299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.553834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.553849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.554299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.554315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.554498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.554513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 
00:29:11.144 [2024-07-26 11:17:30.554894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.554909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.555230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.555245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.555705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.555721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.556067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.556083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.556611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.556626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.557091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.557107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.557618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.557633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.558164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.144 [2024-07-26 11:17:30.558179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.144 qpair failed and we were unable to recover it. 00:29:11.144 [2024-07-26 11:17:30.558653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.558668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.559174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.559189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 
00:29:11.145 [2024-07-26 11:17:30.559456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.559471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.559863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.559878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.560294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.560309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.560711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.560729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.561260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.561276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.561781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.561797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.562186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.562201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.562705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.562720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.563159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.563174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.563640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.563655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 
00:29:11.145 [2024-07-26 11:17:30.564180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.564196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.564721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.564737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.565219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.565235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.565740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.565756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.566283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.566298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.566826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.566841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.567325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.567341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.567871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.567886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.568390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.568406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.568815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.568831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 
00:29:11.145 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.145 [2024-07-26 11:17:30.569305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.569321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.569645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.569660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.570163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.570178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.570707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.570722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.571251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.571267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.571840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.571855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.572374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.572390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.572842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.572857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.573403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.573419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.573795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.573810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 
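The "EAL: No free 2048 kB hugepages reported on node 1" record above is a DPDK EAL diagnostic: NUMA node 1 had no free 2 MB hugepages when this secondary nvmf process initialized. A minimal sketch for inspecting those counters is shown below; it assumes the standard Linux sysfs hugepage layout and uses node 1 only because that is the node named in the log (it is not part of the test code).

#include <stdio.h>

/* Read a single integer counter from a sysfs file; returns -1 on error. */
static long read_counter(const char *path)
{
    FILE *f = fopen(path, "r");
    long value = -1;

    if (f) {
        if (fscanf(f, "%ld", &value) != 1)
            value = -1;
        fclose(f);
    }
    return value;
}

int main(void)
{
    const char *base =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB";
    char path[256];

    snprintf(path, sizeof(path), "%s/nr_hugepages", base);
    printf("node1 2048kB hugepages total: %ld\n", read_counter(path));

    snprintf(path, sizeof(path), "%s/free_hugepages", base);
    printf("node1 2048kB hugepages free:  %ld\n", read_counter(path));

    return 0;
}

A free count of 0 here matches the EAL message; it does not by itself fail the run, since the process may still get memory from another node or from pages reserved under a different size.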
00:29:11.145 [2024-07-26 11:17:30.574340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.574356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.574798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.574813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.575314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.575329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.575776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.575791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.576295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.576311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.145 qpair failed and we were unable to recover it. 00:29:11.145 [2024-07-26 11:17:30.576775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.145 [2024-07-26 11:17:30.576791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.577240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.577255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.577721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.577737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.578262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.578278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.578803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.578819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 
00:29:11.146 [2024-07-26 11:17:30.579340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.579356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.579533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.579549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.580057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.580073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.580597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.580615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.581118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.581134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.581633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.581648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.582095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.582110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.582659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.582675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.583128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.583143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.583588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.583603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 
00:29:11.146 [2024-07-26 11:17:30.584040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.584063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.584509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.584524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.585027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.585048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.585500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.585515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.585959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.585974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.586420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.586436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.586939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.586955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.587294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.587310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.587753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.587768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.588231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.588246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 
00:29:11.146 [2024-07-26 11:17:30.588747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.588762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.589232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.589247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.589718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.589733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.590180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.590195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.590723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.590738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.591192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.591208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.591677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.591692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.592141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.592156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.592603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.592618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.593121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.593136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 
00:29:11.146 [2024-07-26 11:17:30.593579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.593594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.594070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.594086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.146 [2024-07-26 11:17:30.594530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.146 [2024-07-26 11:17:30.594545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.146 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.594937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.594952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.595475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.595491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.595662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.595677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.596176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.596190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.596649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.596664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.597124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.597138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.597662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.597676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 
00:29:11.147 [2024-07-26 11:17:30.598065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.598080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.598548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.598563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.598870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.598885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.599349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.599367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.599524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.599539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.600014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.600029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.600541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.600556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.600892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.600906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.601435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.601451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.601974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.601989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 
00:29:11.147 [2024-07-26 11:17:30.602513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.602528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.602974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.602989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.603426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.603442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.603677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.603691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.147 qpair failed and we were unable to recover it. 00:29:11.147 [2024-07-26 11:17:30.604097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.147 [2024-07-26 11:17:30.604112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.604284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.604299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.604761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.604776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.605276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.605291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.605695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.605710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.606144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.606161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 
00:29:11.148 [2024-07-26 11:17:30.606551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.606566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.606820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.606835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.607356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.607372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.607875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.607890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.608336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.608351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.608851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.608866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.609325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.609340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.609803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.609835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.610284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.610299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.610820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.610835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 
00:29:11.148 [2024-07-26 11:17:30.611235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.611250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.611697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.611713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.612198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.612214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.612711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.612727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.613243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.613259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.613693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.613708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.614155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.614170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.614616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.614631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.615153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.615168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.615712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.615727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 
00:29:11.148 [2024-07-26 11:17:30.616243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.616259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.616548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.148 [2024-07-26 11:17:30.616718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.616734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.617258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.617274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.617730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.617745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.618267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.618282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.618758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.618774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.619305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.619320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.619817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.619833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 00:29:11.148 [2024-07-26 11:17:30.620288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.148 [2024-07-26 11:17:30.620304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.148 qpair failed and we were unable to recover it. 
00:29:11.148 [2024-07-26 11:17:30.620530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.149 [2024-07-26 11:17:30.620546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.149 qpair failed and we were unable to recover it. 00:29:11.149 [2024-07-26 11:17:30.621010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.149 [2024-07-26 11:17:30.621025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.149 qpair failed and we were unable to recover it. 00:29:11.417 [2024-07-26 11:17:30.621549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.417 [2024-07-26 11:17:30.621567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.417 qpair failed and we were unable to recover it. 00:29:11.417 [2024-07-26 11:17:30.621891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.417 [2024-07-26 11:17:30.621906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.417 qpair failed and we were unable to recover it. 00:29:11.417 [2024-07-26 11:17:30.622314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.417 [2024-07-26 11:17:30.622330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.417 qpair failed and we were unable to recover it. 00:29:11.417 [2024-07-26 11:17:30.622852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.417 [2024-07-26 11:17:30.622867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.417 qpair failed and we were unable to recover it. 00:29:11.417 [2024-07-26 11:17:30.623387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.417 [2024-07-26 11:17:30.623403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.417 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.623800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.623817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.624205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.624221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.624742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.624758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 
00:29:11.418 [2024-07-26 11:17:30.625206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.625222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.625695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.625712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.626240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.626259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.626516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.626532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.626982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.626997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.627380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.627395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.627784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.627800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.628323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.628339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.628793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.628808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.629343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.629359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 
00:29:11.418 [2024-07-26 11:17:30.629814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.629829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.630338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.630354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.630731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.630746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.631250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.631266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.631723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.631738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.632141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.632155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.632672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.632687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.633139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.633154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.633656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.633671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.633923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.633938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 
00:29:11.418 [2024-07-26 11:17:30.634370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.634386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.634916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.634931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.635429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.635445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.635965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.635980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.636481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.636497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.636997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.637012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.637454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.637469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.637903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.637919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.638419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.638434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.638884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.638900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 
00:29:11.418 [2024-07-26 11:17:30.639421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.418 [2024-07-26 11:17:30.639437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.418 qpair failed and we were unable to recover it. 00:29:11.418 [2024-07-26 11:17:30.639904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.639919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.640309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.640324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.640849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.640864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.641306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.641322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.641762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.641776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.642146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.642161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.642661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.642682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.642998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.643013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.643540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.643556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 
00:29:11.419 [2024-07-26 11:17:30.643998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.644013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.644512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.644527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.644821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.644836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.645359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.645375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.645825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.645841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.646354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.646370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.646821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.646837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.647339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.647354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.647865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.647880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.648341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.648357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 
00:29:11.419 [2024-07-26 11:17:30.648859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.648874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.649374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.649390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.649906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.649922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.650449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.650465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.650969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.650984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.651439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.651455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.651984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.651999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.652439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.652455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.652906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.652921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.653395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.653413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 
00:29:11.419 [2024-07-26 11:17:30.653937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.653960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.654151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.654169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.654623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.654642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.655097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.655116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.655658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.655677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.656206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.656226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.656726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.656743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.657061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.657077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.419 qpair failed and we were unable to recover it. 00:29:11.419 [2024-07-26 11:17:30.657477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.419 [2024-07-26 11:17:30.657494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.658018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.658035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 
00:29:11.420 [2024-07-26 11:17:30.658361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.658377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.658822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.658838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.659340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.659357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.659748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.659765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.660306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.660323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.660775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.660791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.661317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.661335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.661847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.661869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.662255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.662274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.662754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.662771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 
00:29:11.420 [2024-07-26 11:17:30.663240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.663257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.663705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.663721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.664187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.664203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.664645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.664660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.665036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.665056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.665554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.665570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.666069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.666085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.666568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.666583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.667034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.667056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.667233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.667248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 
00:29:11.420 [2024-07-26 11:17:30.667783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.667798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.668262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.668278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.668714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.668729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.669181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.669197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.669731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.669746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.670116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.670132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.670573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.670588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.671035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.671054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.671497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.671513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.672009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.672024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 
00:29:11.420 [2024-07-26 11:17:30.672549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.672564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.673088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.673104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.673556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.673571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.674021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.674036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.674569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.420 [2024-07-26 11:17:30.674584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.420 qpair failed and we were unable to recover it. 00:29:11.420 [2024-07-26 11:17:30.675102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.675117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.675563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.675577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.676055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.676070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.676510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.676526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.676749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.676764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 
00:29:11.421 [2024-07-26 11:17:30.677233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.677249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.677646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.677661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.678124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.678139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.678677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.678692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.679144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.679160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.679386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.679401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.679872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.679887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.680275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.680304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.680770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.680785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.681308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.681324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 
00:29:11.421 [2024-07-26 11:17:30.681791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.681806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.682206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.682222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.682662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.682677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.683135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.683150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.683654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.683669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.683835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.683851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.684294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.684309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.684808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.684824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.685268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.685283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.685726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.685741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 
00:29:11.421 [2024-07-26 11:17:30.686283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.686298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.686744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.686759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.687267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.687282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.687717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.687732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.688193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.688210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.688737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.688752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.689186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.689201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.689702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.689717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.690170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.690186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 00:29:11.421 [2024-07-26 11:17:30.690660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.421 [2024-07-26 11:17:30.690676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.421 qpair failed and we were unable to recover it. 
00:29:11.421 [2024-07-26 11:17:30.691058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.421 [2024-07-26 11:17:30.691074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420
00:29:11.421 qpair failed and we were unable to recover it.
00:29:11.421 [2024-07-26 11:17:30.691596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.421 [2024-07-26 11:17:30.691611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420
00:29:11.421 qpair failed and we were unable to recover it.
00:29:11.422 [2024-07-26 11:17:30.691653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:11.422 [2024-07-26 11:17:30.691685] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:11.422 [2024-07-26 11:17:30.691692] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:11.422 [2024-07-26 11:17:30.691699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:11.422 [2024-07-26 11:17:30.691704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:11.422 [2024-07-26 11:17:30.691815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:11.422 [2024-07-26 11:17:30.691921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:11.422 [2024-07-26 11:17:30.692028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:11.422 [2024-07-26 11:17:30.692110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.422 [2024-07-26 11:17:30.692126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420
00:29:11.422 qpair failed and we were unable to recover it.
00:29:11.422 [2024-07-26 11:17:30.692029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:11.422 [2024-07-26 11:17:30.692626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.422 [2024-07-26 11:17:30.692641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420
00:29:11.422 qpair failed and we were unable to recover it.
00:29:11.422 [2024-07-26 11:17:30.693143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.422 [2024-07-26 11:17:30.693158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420
00:29:11.422 qpair failed and we were unable to recover it.
00:29:11.422 [2024-07-26 11:17:30.693555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.422 [2024-07-26 11:17:30.693570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420
00:29:11.422 qpair failed and we were unable to recover it.
00:29:11.422 [2024-07-26 11:17:30.694015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.422 [2024-07-26 11:17:30.694030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420
00:29:11.422 qpair failed and we were unable to recover it.
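The app_setup_trace notices above describe how the trace data announced by the nvmf target could be captured. A minimal shell sketch based only on those notices, assuming the SPDK tools are on PATH and instance id 0 as printed; the output filename and copy destination are illustrative assumptions, not taken from this log:
+ spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # snapshot of events while the target is still running (redirect target is assumed)
+ cp /dev/shm/nvmf_trace.0 .                 # or keep the raw shm file for offline analysis/debug, as the notice suggests
Per the notice, 'spdk_trace' with no parameters would also work if this is the only SPDK application currently running.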
00:29:11.422 [2024-07-26 11:17:30.694485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.694500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.695023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.695038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.695561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.695576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.696025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.696040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.696568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.696583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.697131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.697147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.697646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.697662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.698123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.698142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.698618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.698633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.698879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.698895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 
00:29:11.422 [2024-07-26 11:17:30.699419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.699435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.699810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.699824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.700346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.700362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.700799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.700813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.701336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.701352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.701848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.701863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.702257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.702273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.702705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.702720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.703217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.703233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 00:29:11.422 [2024-07-26 11:17:30.703745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.422 [2024-07-26 11:17:30.703760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.422 qpair failed and we were unable to recover it. 
00:29:11.422 [2024-07-26 11:17:30.704202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.704218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.704672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.704688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.705041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.705063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.705563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.705581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.706103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.706120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.706587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.706604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.707037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.707060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.707528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.707544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.707984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.708001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.708473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.708490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 
00:29:11.423 [2024-07-26 11:17:30.708940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.708958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.709432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.709452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.709886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.709903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.710350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.710366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.710874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.710893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.711277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.711293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.711791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.711807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.712280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.712297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.712820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.712837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.713357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.713374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 
00:29:11.423 [2024-07-26 11:17:30.713825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.713842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.714365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.714382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.714618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.714634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.715157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.715174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.715692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.715708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.716200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.716216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.716631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.716647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.717159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.717182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.717627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.717643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.717920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.717935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 
00:29:11.423 [2024-07-26 11:17:30.718387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.718404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.718868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.718884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.423 qpair failed and we were unable to recover it. 00:29:11.423 [2024-07-26 11:17:30.719351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.423 [2024-07-26 11:17:30.719367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.719828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.719843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.720248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.720263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.720712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.720728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.721181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.721199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.721644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.721659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.722187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.722203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.722668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.722683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 
00:29:11.424 [2024-07-26 11:17:30.723188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.723204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.723737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.723753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.724219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.724235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.724508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.724524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.725020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.725037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.725539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.725555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.726067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.726083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.726541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.726556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.727026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.727041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.727497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.727512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 
00:29:11.424 [2024-07-26 11:17:30.728011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.728027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.728500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.728515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.729038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.729063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.729568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.729584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.730107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.730122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.730650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.730665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.731060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.731076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.731522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.731537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.731989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.732004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.732502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.732518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 
00:29:11.424 [2024-07-26 11:17:30.732970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.732986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.733438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.733456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.733979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.733995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.734514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.734531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.734925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.734940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.735203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.735220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.735728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.735744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.736243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.424 [2024-07-26 11:17:30.736264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.424 qpair failed and we were unable to recover it. 00:29:11.424 [2024-07-26 11:17:30.736784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.736801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.736976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.736993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 
00:29:11.425 [2024-07-26 11:17:30.737441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.737459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.737668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.737684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.738172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.738189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.738640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.738657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.739105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.739122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.739519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.739536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.739978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.739994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.740518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.740535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.741039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.741064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.741519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.741535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 
00:29:11.425 [2024-07-26 11:17:30.742033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.742060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.742587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.742603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.743127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.743143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.743711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.743728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.744254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.744270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.744768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.744784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.745182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.745199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.745724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.745740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.746194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.746209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 00:29:11.425 [2024-07-26 11:17:30.746687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.425 [2024-07-26 11:17:30.746703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.425 qpair failed and we were unable to recover it. 
00:29:11.425 [2024-07-26 11:17:30.747200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.425 [2024-07-26 11:17:30.747216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420
00:29:11.425 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1023:posix_sock_create connect() failed with errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously between the entries shown above and below ...]
00:29:11.431 [2024-07-26 11:17:30.845563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.431 [2024-07-26 11:17:30.845578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420
00:29:11.431 qpair failed and we were unable to recover it.
00:29:11.431 [2024-07-26 11:17:30.846051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-07-26 11:17:30.846066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-07-26 11:17:30.846506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-07-26 11:17:30.846520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-07-26 11:17:30.846791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-07-26 11:17:30.846806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-07-26 11:17:30.847186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-07-26 11:17:30.847201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-07-26 11:17:30.847728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-07-26 11:17:30.847743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.431 [2024-07-26 11:17:30.848125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.431 [2024-07-26 11:17:30.848140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.431 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.848587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.848602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.849055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.849072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.849544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.849559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.850021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.850036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 
00:29:11.432 [2024-07-26 11:17:30.850423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.850438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.850910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.850926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.851424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.851439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.851840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.851856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.852376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.852391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.852549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.852563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.853069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.853086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.853557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.853571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.854040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.854061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.854509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.854524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 
00:29:11.432 [2024-07-26 11:17:30.855022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.855038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.855513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.855529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.855910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.855925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.856377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.856392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.856763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.856778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.857299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.857315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.857814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.857829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.858348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.858364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.858534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.858549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.859001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.859016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 
00:29:11.432 [2024-07-26 11:17:30.859543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.859559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.860057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.860072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.860466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.860481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.861002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.432 [2024-07-26 11:17:30.861017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.432 qpair failed and we were unable to recover it. 00:29:11.432 [2024-07-26 11:17:30.861463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.861482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.861953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.861968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.862486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.862501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.863008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.863023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.863249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.863264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.863630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.863645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 
00:29:11.433 [2024-07-26 11:17:30.864158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.864173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.864680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.864694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.865161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.865177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.865700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.865715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.866149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.866164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.866631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.866645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.867083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.867099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.867550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.867566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.868093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.868109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.868632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.868646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 
00:29:11.433 [2024-07-26 11:17:30.869050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.869069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.869519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.869534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.870036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.870059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.870499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.870514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.870972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.870987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.871507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.871522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.872049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.872064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.872521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.872536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.872971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.872986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.873431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.873448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 
00:29:11.433 [2024-07-26 11:17:30.873966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.873982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.874505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.874521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.875065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.875081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.875639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.875654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.876203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.876218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.876718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.876733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.877230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.877249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.877750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.877764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.878289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.878303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.878752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.878766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 
00:29:11.433 [2024-07-26 11:17:30.879216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.879231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.433 qpair failed and we were unable to recover it. 00:29:11.433 [2024-07-26 11:17:30.879663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.433 [2024-07-26 11:17:30.879677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.880119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.880132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.880567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.880581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.881100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.881120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.881624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.881637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.882056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.882072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.882510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.882524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.882981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.882995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.883436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.883449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 
00:29:11.434 [2024-07-26 11:17:30.883971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.883985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.884434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.884449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.885236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.885252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.885754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.885767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.886234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.886248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.886699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.886712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.887164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.887178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.887702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.887716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.888241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.888256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.888710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.888724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 
00:29:11.434 [2024-07-26 11:17:30.889186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.889201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.889642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.889655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.890089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.890103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.890544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.890557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.891063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.891077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.891619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.891632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.892154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.892168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.892583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.892596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.893038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.893062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.893508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.893522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 
00:29:11.434 [2024-07-26 11:17:30.894063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.894078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.894579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.894592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.895041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.895060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.895586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.895599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.896120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.896134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.896632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.896645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.897159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.897174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.897616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.897630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.898079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.434 [2024-07-26 11:17:30.898093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.434 qpair failed and we were unable to recover it. 00:29:11.434 [2024-07-26 11:17:30.898633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.435 [2024-07-26 11:17:30.898646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.435 qpair failed and we were unable to recover it. 
00:29:11.435 [2024-07-26 11:17:30.899156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.435 [2024-07-26 11:17:30.899170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.435 qpair failed and we were unable to recover it. 00:29:11.435 [2024-07-26 11:17:30.899687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.435 [2024-07-26 11:17:30.899701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.435 qpair failed and we were unable to recover it. 00:29:11.435 [2024-07-26 11:17:30.900149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.435 [2024-07-26 11:17:30.900163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.435 qpair failed and we were unable to recover it. 00:29:11.435 [2024-07-26 11:17:30.900638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.435 [2024-07-26 11:17:30.900651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.435 qpair failed and we were unable to recover it. 00:29:11.435 [2024-07-26 11:17:30.901032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.435 [2024-07-26 11:17:30.901065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.435 qpair failed and we were unable to recover it. 00:29:11.435 [2024-07-26 11:17:30.901572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.435 [2024-07-26 11:17:30.901586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.435 qpair failed and we were unable to recover it. 00:29:11.435 [2024-07-26 11:17:30.902111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.435 [2024-07-26 11:17:30.902125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.435 qpair failed and we were unable to recover it. 00:29:11.435 [2024-07-26 11:17:30.902527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.435 [2024-07-26 11:17:30.902540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.435 qpair failed and we were unable to recover it. 00:29:11.435 [2024-07-26 11:17:30.902864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.435 [2024-07-26 11:17:30.902878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.435 qpair failed and we were unable to recover it. 00:29:11.435 [2024-07-26 11:17:30.903407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.435 [2024-07-26 11:17:30.903421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.435 qpair failed and we were unable to recover it. 
00:29:11.435 [2024-07-26 11:17:30.903836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.435 [2024-07-26 11:17:30.903849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.435 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.904291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.904307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.904706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.904720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.905222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.905239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.905682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.905696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.905913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.905927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.906324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.906340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.906790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.906804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.907253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.907267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.907642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.907655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 
00:29:11.705 [2024-07-26 11:17:30.908203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.908217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.908672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.908686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.909079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.909094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.909488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.909503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.909979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.909993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.910463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.910477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.910887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.910901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.911368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.911382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.911828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.911842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.912298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.912312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 
00:29:11.705 [2024-07-26 11:17:30.912701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.912714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.913049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.913067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.913514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.913527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.914025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.914038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.914489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.914503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.915002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.915015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.915568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.915582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.916080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.916095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.916540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.916554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.916957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.916973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 
00:29:11.705 [2024-07-26 11:17:30.917361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.917376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.917825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.917839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.918329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.705 [2024-07-26 11:17:30.918343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.705 qpair failed and we were unable to recover it. 00:29:11.705 [2024-07-26 11:17:30.918782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.918795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.919253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.919270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.919781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.919795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.920244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.920260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.920720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.920733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.921132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.921147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.921666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.921680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 
00:29:11.706 [2024-07-26 11:17:30.922138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.922152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.922557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.922570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.923071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.923086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.923535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.923549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.924024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.924037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.924431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.924445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.924971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.924984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.925429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.925444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.925839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.925852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.926369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.926383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 
00:29:11.706 [2024-07-26 11:17:30.926887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.926902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.927284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.927298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.927747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.927761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.928262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.928277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.928730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.928744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.929189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.929205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.929705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.929719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.930163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.930177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.930570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.930584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.930978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.930992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 
00:29:11.706 [2024-07-26 11:17:30.931536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.931550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.932004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.932018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.932411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.932427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.932874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.932888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.933413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.933428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.933900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.933914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.934393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.934407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.934856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.934870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.935323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.935338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 00:29:11.706 [2024-07-26 11:17:30.935784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.706 [2024-07-26 11:17:30.935798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.706 qpair failed and we were unable to recover it. 
00:29:11.706 [2024-07-26 11:17:30.936188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.936202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.936702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.936717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.937217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.937232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.937753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.937767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.938246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.938263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.938735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.938749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.939144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.939159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.939610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.939624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.940071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.940085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.940466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.940480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 
00:29:11.707 [2024-07-26 11:17:30.940924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.940937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.941335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.941350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.941808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.941821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.942268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.942282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.942732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.942746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.943148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.943162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.943604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.943618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.943997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.944011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.944390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.944403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.944845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.944858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 
00:29:11.707 [2024-07-26 11:17:30.945361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.945377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.945879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.945893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.946276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.946290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.946736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.946751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.947070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.947084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.947477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.947490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.947990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.948003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.948454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.948467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.948696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.948710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.949110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.949125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 
00:29:11.707 [2024-07-26 11:17:30.949589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.949603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.950128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.950142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.950591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.950605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.950999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.951013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.951444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.951458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.951903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.951917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.707 [2024-07-26 11:17:30.952366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.707 [2024-07-26 11:17:30.952381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.707 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.952813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.952827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.953314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.953329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.953772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.953786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 
00:29:11.708 [2024-07-26 11:17:30.954188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.954202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.954726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.954740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.955118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.955131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.955583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.955596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.956053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.956071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.956466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.956480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.956859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.956872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.957303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.957318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.957859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.957873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.958432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.958446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 
00:29:11.708 [2024-07-26 11:17:30.958785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.958798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.959252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.959266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.959649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.959663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.960135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.960149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.960594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.960607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.960985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.960998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.961478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.961494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.961938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.961951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.962401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.962416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.962795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.962808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 
00:29:11.708 [2024-07-26 11:17:30.963253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.963268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.963702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.963716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.964176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.964189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.964637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.964650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.965115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.965131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.965577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.965591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.966054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.966072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.708 [2024-07-26 11:17:30.966459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.708 [2024-07-26 11:17:30.966473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.708 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.966925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.966939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.967441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.967455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 
00:29:11.709 [2024-07-26 11:17:30.967902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.967916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.968304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.968318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.968763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.968776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.969212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.969226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.969699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.969713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.970108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.970122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.970513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.970528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.970926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.970940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.971621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.971636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.972171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.972186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 
00:29:11.709 [2024-07-26 11:17:30.972651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.972665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.973069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.973083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.973539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.973552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.973937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.973950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.974427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.974444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.974839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.974852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.975245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.975260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.975693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.975707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.976214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.976229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.976710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.976724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 
00:29:11.709 [2024-07-26 11:17:30.977115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.977129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.977578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.977592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.978028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.978049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.978571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.978585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.978986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.979000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.979157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.979171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.979676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.979690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.709 qpair failed and we were unable to recover it. 00:29:11.709 [2024-07-26 11:17:30.980092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.709 [2024-07-26 11:17:30.980108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-07-26 11:17:30.980506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-07-26 11:17:30.980519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-07-26 11:17:30.981049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-07-26 11:17:30.981064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 
00:29:11.710 [2024-07-26 11:17:30.981440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-07-26 11:17:30.981454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-07-26 11:17:30.981855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-07-26 11:17:30.981869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-07-26 11:17:30.982322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-07-26 11:17:30.982337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-07-26 11:17:30.982732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-07-26 11:17:30.982746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-07-26 11:17:30.983241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-07-26 11:17:30.983255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-07-26 11:17:30.983431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-07-26 11:17:30.983444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-07-26 11:17:30.983893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-07-26 11:17:30.983907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-07-26 11:17:30.984292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-07-26 11:17:30.984306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-07-26 11:17:30.985058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-07-26 11:17:30.985073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 00:29:11.710 [2024-07-26 11:17:30.985524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.710 [2024-07-26 11:17:30.985538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.710 qpair failed and we were unable to recover it. 
00:29:11.710 [2024-07-26 11:17:30.986017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.710 [2024-07-26 11:17:30.986031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420
00:29:11.710 qpair failed and we were unable to recover it.
00:29:11.710 [... the same three-message pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 11:17:30.986 through 11:17:31.077; only the timestamps change ...]
00:29:11.717 [2024-07-26 11:17:31.078361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.717 [2024-07-26 11:17:31.078375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.717 qpair failed and we were unable to recover it. 00:29:11.717 [2024-07-26 11:17:31.078829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.717 [2024-07-26 11:17:31.078843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.717 qpair failed and we were unable to recover it. 00:29:11.717 [2024-07-26 11:17:31.079231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.717 [2024-07-26 11:17:31.079245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.717 qpair failed and we were unable to recover it. 00:29:11.717 [2024-07-26 11:17:31.079688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.717 [2024-07-26 11:17:31.079702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.717 qpair failed and we were unable to recover it. 00:29:11.717 [2024-07-26 11:17:31.080086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.717 [2024-07-26 11:17:31.080102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.717 qpair failed and we were unable to recover it. 00:29:11.717 [2024-07-26 11:17:31.080507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.717 [2024-07-26 11:17:31.080520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.717 qpair failed and we were unable to recover it. 00:29:11.717 [2024-07-26 11:17:31.080914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.717 [2024-07-26 11:17:31.080928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.717 qpair failed and we were unable to recover it. 00:29:11.717 [2024-07-26 11:17:31.081341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.717 [2024-07-26 11:17:31.081356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.717 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.081734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.081748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.082130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.082144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 
00:29:11.718 [2024-07-26 11:17:31.082620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.082633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.083011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.083024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.083554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.083568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.083958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.083971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.084415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.084430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.084880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.084893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.085344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.085358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.086120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.086135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.086664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.086678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.087124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.087139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 
00:29:11.718 [2024-07-26 11:17:31.087593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.087607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.087984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.087997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.088500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.088516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.088973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.088989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.089385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.089400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.089870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.089884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.090353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.090368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.090826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.090840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.091247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.091261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.091765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.091779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 
00:29:11.718 [2024-07-26 11:17:31.092074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.092089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.092479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.092496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.092890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.092904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.093289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.093303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.093754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.093768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.718 [2024-07-26 11:17:31.094272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.718 [2024-07-26 11:17:31.094286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.718 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.094789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.094803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.095301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.095316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.095713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.095727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.096136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.096151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 
00:29:11.719 [2024-07-26 11:17:31.096579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.096594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.097118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.097132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.097527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.097541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.098124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.098139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.098372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.098387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.099069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.099083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.099393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.099408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.099796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.099809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.100210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.100226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.100616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.100629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 
00:29:11.719 [2024-07-26 11:17:31.101353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.101368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.101823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.101837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.102287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.102302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.102684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.102698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.102866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.102880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.103269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.103283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.103724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.103738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.104200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.104215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.104666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.104680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.105183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.105198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 
00:29:11.719 [2024-07-26 11:17:31.105576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.105590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.106036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.106056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.106524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.106538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.106936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.106950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.107435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.107450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.107821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.107835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.719 [2024-07-26 11:17:31.108307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.719 [2024-07-26 11:17:31.108322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.719 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.108715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.108729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.109126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.109141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.109538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.109553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 
00:29:11.720 [2024-07-26 11:17:31.109935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.109949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.110396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.110415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.110789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.110802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.111363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.111378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.111825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.111838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.112241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.112256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.112661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.112675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.113063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.113077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.113462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.113476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.113861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.113875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 
00:29:11.720 [2024-07-26 11:17:31.114271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.114285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.114635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.114648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.115088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.115102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.115536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.115550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.115993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.116007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.116462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.116478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.116919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.116934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.117318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.117332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.117764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.117778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.118234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.118249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 
00:29:11.720 [2024-07-26 11:17:31.118771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.118785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.119243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.119257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.119781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.119795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.120059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.120075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.120453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.120467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.120864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.120878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.121353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.720 [2024-07-26 11:17:31.121367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.720 qpair failed and we were unable to recover it. 00:29:11.720 [2024-07-26 11:17:31.121757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.121771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.122265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.122279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.122499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.122513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 
00:29:11.721 [2024-07-26 11:17:31.122892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.122906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.123281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.123296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.123749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.123763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.124208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.124223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.124604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.124618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.125079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.125094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.125268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.125282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.125670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.125685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.126071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.126085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.126536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.126551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 
00:29:11.721 [2024-07-26 11:17:31.126991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.127005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.127397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.127414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.127679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.127693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.128162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.128178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.128526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.128540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.128996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.129010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.129398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.129413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.129924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.129938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.130376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.130391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 00:29:11.721 [2024-07-26 11:17:31.130923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.721 [2024-07-26 11:17:31.130936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.721 qpair failed and we were unable to recover it. 
00:29:11.722 [2024-07-26 11:17:31.131322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-07-26 11:17:31.131337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-07-26 11:17:31.131715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-07-26 11:17:31.131729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-07-26 11:17:31.132109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-07-26 11:17:31.132125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-07-26 11:17:31.132580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-07-26 11:17:31.132594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-07-26 11:17:31.132990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-07-26 11:17:31.133004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-07-26 11:17:31.133465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-07-26 11:17:31.133487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-07-26 11:17:31.133991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-07-26 11:17:31.134005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-07-26 11:17:31.134404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-07-26 11:17:31.134418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-07-26 11:17:31.134821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-07-26 11:17:31.134836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 00:29:11.722 [2024-07-26 11:17:31.135338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.722 [2024-07-26 11:17:31.135352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.722 qpair failed and we were unable to recover it. 
00:29:11.722 [2024-07-26 11:17:31.135807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.722 [2024-07-26 11:17:31.135822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420
00:29:11.722 qpair failed and we were unable to recover it.
00:29:11.998 [the same three-message sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retry timestamped from 11:17:31.135 through 11:17:31.227]
00:29:11.998 [2024-07-26 11:17:31.227841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.227856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.228305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.228319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.228691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.228704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.229191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.229206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.229587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.229600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.229988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.230001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.230429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.230444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.230885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.230899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.231339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.231353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.231730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.231744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 
00:29:11.998 [2024-07-26 11:17:31.232210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.232224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.232672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.232689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.233129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.233143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.233650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.233663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.234307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.234321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.234764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.234778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.235230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.235244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.235694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.235708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.236205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.236219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.236724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.236739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 
00:29:11.998 [2024-07-26 11:17:31.237138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.237153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.237540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.237553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.237945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.998 [2024-07-26 11:17:31.237959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.998 qpair failed and we were unable to recover it. 00:29:11.998 [2024-07-26 11:17:31.238412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.238427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.238825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.238839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.239251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.239265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.239709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.239723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.240102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.240117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.240517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.240537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.240923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.240937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 
00:29:11.999 [2024-07-26 11:17:31.241438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.241453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.241821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.241835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.242235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.242250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.242429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.242443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.242896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.242910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.243342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.243356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.243798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.243812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.244249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.244264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.244661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.244675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.245138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.245153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 
00:29:11.999 [2024-07-26 11:17:31.245598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.245612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.246055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.246070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.246442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.246457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.246907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.246921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.247604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.247619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.247993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.248009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.248462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.248477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.248809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.248824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.249150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.249165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.249665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.249680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 
00:29:11.999 [2024-07-26 11:17:31.250074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.250088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.250531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.250547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.250934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.250948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.251372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.251387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.251618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.251631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.251961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.251974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.252369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.252384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.252829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.252843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.253230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.253245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 00:29:11.999 [2024-07-26 11:17:31.253703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.999 [2024-07-26 11:17:31.253717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:11.999 qpair failed and we were unable to recover it. 
00:29:11.999 [2024-07-26 11:17:31.254178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.254193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.254668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.254682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.255086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.255100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.255376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.255391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.255784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.255798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.256239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.256254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.256707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.256721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.257162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.257177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.257626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.257640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.258023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.258037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 
00:29:12.000 [2024-07-26 11:17:31.258440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.258454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.258832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.258846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.259253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.259268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.259722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.259735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.260131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.260145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.260533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.260547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.260924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.260938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.261230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.261244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.261404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.261424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.261879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.261893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 
00:29:12.000 [2024-07-26 11:17:31.262274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.262288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.262809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.262824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.263324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.263339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.263511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.263525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.263996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.000 [2024-07-26 11:17:31.264010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.000 qpair failed and we were unable to recover it. 00:29:12.000 [2024-07-26 11:17:31.264384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.264398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.264842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.264855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.265018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.265031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.265480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.265494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.265896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.265910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 
00:29:12.001 [2024-07-26 11:17:31.266299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.266313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.266691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.266708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.267161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.267175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.267676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.267690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.268066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.268080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.268472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.268488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.268949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.268963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.269330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.269344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.269790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.269804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.270197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.270211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 
00:29:12.001 [2024-07-26 11:17:31.270664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.270678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.271204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.271218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.271619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.271633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.272023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.272036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.272505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.272520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.272954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.272968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.273145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.273159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.273619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.273632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.274135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.274150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.274605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.274619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 
00:29:12.001 [2024-07-26 11:17:31.275000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.275014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.275396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.275410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.275931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.275945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.276352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.276366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.276802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.276816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.277219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.277234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.277677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.277691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.278096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.278110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.001 [2024-07-26 11:17:31.278512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.001 [2024-07-26 11:17:31.278526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.001 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.279055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.279070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 
00:29:12.002 [2024-07-26 11:17:31.279339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.279353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.279825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.279838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.280289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.280303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.280741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.280755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.281105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.281125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.281559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.281573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.282015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.282029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.282483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.282498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.282729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.282743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.283188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.283202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 
00:29:12.002 [2024-07-26 11:17:31.283653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.283666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.283853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.283872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.284330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.284344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.284725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.284739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.285191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.285204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.285586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.285599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.286051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.286066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.286524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.286539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.286942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.286956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.287487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.287502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 
00:29:12.002 [2024-07-26 11:17:31.287889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.287903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.288287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.288302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.288740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.288754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.288985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.288999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.289404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.289419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.289802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.289817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.290215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.290230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.290623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.290637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.291028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.291049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.291495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.291509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 
00:29:12.002 [2024-07-26 11:17:31.292139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.292154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.292541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.292556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.292965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.292979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.293146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.002 [2024-07-26 11:17:31.293160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.002 qpair failed and we were unable to recover it. 00:29:12.002 [2024-07-26 11:17:31.293383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.293397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.293921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.293936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.294344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.294358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.294792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.294805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.295254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.295269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.295748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.295763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 
00:29:12.003 [2024-07-26 11:17:31.296273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.296288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.296771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.296784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.297287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.297301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.297741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.297756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.298151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.298165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.298602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.298616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.299139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.299154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.299676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.299690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.300069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.300084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.300484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.300497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 
00:29:12.003 [2024-07-26 11:17:31.300869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.300883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.301349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.301364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.301869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.301883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.302330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.302346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.302846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.302860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.303424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.303439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.303906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.303921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.304362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.304377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.304783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.304798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.305251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.305265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 
00:29:12.003 [2024-07-26 11:17:31.305465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.305478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.305862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.305876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.306403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.306417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.306810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.306824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.307224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.307239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.307638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.307652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.308108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.308122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.308630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.308644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.308871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.308885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 00:29:12.003 [2024-07-26 11:17:31.309362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.003 [2024-07-26 11:17:31.309376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.003 qpair failed and we were unable to recover it. 
00:29:12.003 [2024-07-26 11:17:31.309763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.309777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.310273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.310287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.310741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.310755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.310932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.310945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.311394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.311408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.312055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.312071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.312597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.312612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.313105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.313120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.313490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.313506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.313898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.313913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 
00:29:12.004 [2024-07-26 11:17:31.314422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.314436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.314884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.314898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.315367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.315382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.315833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.315848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.316307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.316321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.316794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.316808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.317313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.317328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.317721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.317735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.318205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.318219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.318600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.318614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 
00:29:12.004 [2024-07-26 11:17:31.319141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.319156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.319551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.319564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.319961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.319975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.320448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.320463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.320850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.320864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.321313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.321327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.321761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.321776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.322281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.322295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.322741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.322755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.322910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.322924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 
00:29:12.004 [2024-07-26 11:17:31.323155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.323169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.323726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.323740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.324207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.324221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.324673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.324686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.004 [2024-07-26 11:17:31.325137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.004 [2024-07-26 11:17:31.325152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.004 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.325652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.325666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.326071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.326086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.326530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.326544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.326988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.327002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.327500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.327514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 
00:29:12.005 [2024-07-26 11:17:31.327976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.327990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.328466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.328481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.329006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.329021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.329493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.329507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.330014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.330028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.330554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.330569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.330877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.330891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.331344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.331358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.331788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.331804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.332310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.332325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 
00:29:12.005 [2024-07-26 11:17:31.332781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.332795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.333298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.333312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.333760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.333774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.334240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.334255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.334693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.334706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.335160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.335175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.335640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.335655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.335926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.335940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.336189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.336204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.336728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.336742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 
00:29:12.005 [2024-07-26 11:17:31.337189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.337204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.337727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.005 [2024-07-26 11:17:31.337741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.005 qpair failed and we were unable to recover it. 00:29:12.005 [2024-07-26 11:17:31.338212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.338227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.338678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.338692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.339190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.339205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.339454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.339468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.339970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.339984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.340430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.340444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.340830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.340843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.341343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.341357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 
00:29:12.006 [2024-07-26 11:17:31.341856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.341870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.342318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.342332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.342880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.342893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.343403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.343417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.343945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.343960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.344370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.344384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.344820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.344834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.345281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.345295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.345693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.345707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.346231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.346245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 
00:29:12.006 [2024-07-26 11:17:31.346711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.346724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.347176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.347190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.347625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.347639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.348161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.348175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.348632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.348645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.349165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.349179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.349432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.349446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.349824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.349837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.350063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.350079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.350529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.350542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 
00:29:12.006 [2024-07-26 11:17:31.350928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.350941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.351441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.351455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.351893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.351907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.352432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.352446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.352876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.352889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.353435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.353449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.353887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.006 [2024-07-26 11:17:31.353900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.006 qpair failed and we were unable to recover it. 00:29:12.006 [2024-07-26 11:17:31.354268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.354283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.354808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.354822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.355254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.355268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 
00:29:12.007 [2024-07-26 11:17:31.355766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.355780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.356065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.356081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.356559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.356573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.357067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.357081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.357530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.357544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.358003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.358016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.358488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.358502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.358742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.358755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.359241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.359255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.359773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.359787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 
00:29:12.007 [2024-07-26 11:17:31.360250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.360264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.360790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.360804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.361271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.361285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.361785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.361798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.362320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.362334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.362884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.362898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.363139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.363152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.363623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.363636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.364083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.364097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.364621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.364634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 
00:29:12.007 [2024-07-26 11:17:31.364999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.365012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.365484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.365498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.365998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.366012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.366408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.366421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.366922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.366935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.367386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.367400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.367869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.367882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.368382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.368397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.368864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.368879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.369378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.369392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 
00:29:12.007 [2024-07-26 11:17:31.369860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.007 [2024-07-26 11:17:31.369874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.007 qpair failed and we were unable to recover it. 00:29:12.007 [2024-07-26 11:17:31.370399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.370413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.370859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.370873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.371399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.371413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.371871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.371885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.372345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.372359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.372858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.372871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.373261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.373274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.373775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.373789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.374263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.374277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 
00:29:12.008 [2024-07-26 11:17:31.374799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.374812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.375310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.375323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.375852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.375866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.376171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.376185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.376641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.376654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.377178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.377192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.377645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.377659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.378330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.378344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:12.008 [2024-07-26 11:17:31.378842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.378859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 
00:29:12.008 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:12.008 [2024-07-26 11:17:31.379288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.379304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:12.008 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:12.008 [2024-07-26 11:17:31.379805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.379821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.008 [2024-07-26 11:17:31.380369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.380385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.380787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.380801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.381243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.381257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.381785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.381800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.382302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.382317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.382839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.382853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 
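Interleaved with the connection errors, the autotest_common.sh markers ((( i == 0 )) at line 860, return 0 at line 864, then timing_exit start_nvmf_tgt) read like the tail of a retry loop that waits for the freshly launched nvmf target before the test case proceeds. A hypothetical sketch of such a loop, not the actual SPDK helper, polling the target's RPC socket with the stock rpc.py client:

  # Hypothetical wait loop: rpc_get_methods is a cheap RPC that answers as soon
  # as the target's RPC server is up, and -t 1 bounds each probe to one second.
  wait_for_tgt() {
      local i
      for (( i = 50; i > 0; i-- )); do
          scripts/rpc.py -t 1 rpc_get_methods &>/dev/null && break
          sleep 0.5
      done
      (( i == 0 )) && return 1   # target never came up
      return 0                   # target is ready; compare the "return 0" marker above
  }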
00:29:12.008 [2024-07-26 11:17:31.383191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.383205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.383595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.383610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.384252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.384267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.384704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.008 [2024-07-26 11:17:31.384718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.008 qpair failed and we were unable to recover it. 00:29:12.008 [2024-07-26 11:17:31.385175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.385189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.385413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.385427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.385940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.385953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.386412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.386427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.386814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.386828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.387353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.387369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 
00:29:12.009 [2024-07-26 11:17:31.387861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.387875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.388334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.388349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.388853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.388869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.389330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.389345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.389811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.389825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.390227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.390242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.390690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.390704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.391096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.391110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.391542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.391556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.392035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.392096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 
00:29:12.009 [2024-07-26 11:17:31.392550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.392565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.392963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.392977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.393398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.393411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.393856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.393870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.394302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.394316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.394818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.394833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.395270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.395284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.395803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.395817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.396063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.396077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.396348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.396364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 
00:29:12.009 [2024-07-26 11:17:31.396765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.396780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.397224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.397238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.397693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.397707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.398142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.398156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.398547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.398561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.399275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.399290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.399741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.399755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.400162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.400177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.400686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.400700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 00:29:12.009 [2024-07-26 11:17:31.401157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.009 [2024-07-26 11:17:31.401172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.009 qpair failed and we were unable to recover it. 
00:29:12.010 [2024-07-26 11:17:31.401565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.401578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.402018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.402032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.402488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.402502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.402893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.402907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.403299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.403314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.403759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.403773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.404230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.404244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.404744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.404758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.405229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.405243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.405643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.405660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 
00:29:12.010 [2024-07-26 11:17:31.406103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.406117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.406506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.406520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.406969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.406982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.407426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.407440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.407888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.407902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.408357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.408372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.408823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.408839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.409280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.409295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.409698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.409712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.410183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.410197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 
00:29:12.010 [2024-07-26 11:17:31.410584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.410598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.410978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.410992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.411374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.411388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.411826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.411841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.412222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.412237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.412735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.412749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.413201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.413215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 [2024-07-26 11:17:31.413606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.413619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.010 [2024-07-26 11:17:31.413995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.414011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 
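The trap installed at nvmf/common.sh:484 is the standard bash idiom for making cleanup unconditional: whatever the test case does next, the shared-memory dump and the fixture teardown run on SIGINT, SIGTERM, or normal exit. The same pattern spelled out (process_shm and nvmftestfini are SPDK test helpers; the trap mechanics themselves are plain bash):

  cleanup() {
      process_shm --id "$NVMF_APP_SHM_ID" || :   # best effort; '|| :' keeps a dump failure from changing the exit status
      nvmftestfini                               # tear down the nvmf test fixture
  }
  trap cleanup SIGINT SIGTERM EXIT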
00:29:12.010 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:12.010 [2024-07-26 11:17:31.414397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.010 [2024-07-26 11:17:31.414414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.010 qpair failed and we were unable to recover it. 00:29:12.010 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.011 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.011 [2024-07-26 11:17:31.414747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.414762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.415210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.415224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.415747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.415761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.416465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.416480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.416939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.416953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.417407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.417422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.417866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.417879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 
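rpc_cmd is the test suite's wrapper around SPDK's rpc.py client, so the bdev_malloc_create step above is equivalent to calling the RPC client directly: a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0 so the later nvmf_subsystem_add_ns call can refer to it by name.

  # Same call as "rpc_cmd bdev_malloc_create 64 512 -b Malloc0" in the log,
  # issued straight through rpc.py: <size_mb> <block_size> -b <name>.
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0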
00:29:12.011 [2024-07-26 11:17:31.418325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.418339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.418722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.418735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.419118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.419132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.419594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.419607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.419994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.420008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.420532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.420547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.421013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.421027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.421486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.421501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.421977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.421991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.422374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.422390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 
00:29:12.011 [2024-07-26 11:17:31.422835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.422852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.423294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.423309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.423551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.423565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.424017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.424031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.424589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.424604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.425131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.425146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.425598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.425613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.426007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.426021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.426414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.426429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.426868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.426884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 
00:29:12.011 [2024-07-26 11:17:31.427412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.427427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.427873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.427888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.428334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.428349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.011 qpair failed and we were unable to recover it. 00:29:12.011 [2024-07-26 11:17:31.428741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.011 [2024-07-26 11:17:31.428757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.429210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.429226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.429672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.429688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.430135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.430151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.430653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.430668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.431136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.431154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.431526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.431540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 
00:29:12.012 [2024-07-26 11:17:31.431931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.431946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.432467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.432483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.432927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.432940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.433381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.433395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.433842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.433856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 Malloc0 00:29:12.012 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.012 [2024-07-26 11:17:31.434312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.434327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:12.012 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.012 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.012 [2024-07-26 11:17:31.434825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.434838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.435383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.435398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 
00:29:12.012 [2024-07-26 11:17:31.435778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.435791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.436188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.436202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.436574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.436587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.437050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.437065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.437456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.437469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.437474] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.012 [2024-07-26 11:17:31.437988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.438002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.438451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.438465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.438931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.438945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 00:29:12.012 [2024-07-26 11:17:31.439466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.012 [2024-07-26 11:17:31.439480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 00:29:12.012 qpair failed and we were unable to recover it. 
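The "*** TCP Transport Init ***" notice from tcp.c lines up with the nvmf_create_transport call a few records earlier: it is the target acknowledging that the TCP transport has been created. Issued directly rather than through rpc_cmd, the step looks like this (flags mirrored from the log; -o is passed through exactly as the test script does):

  # Create the TCP transport inside the running nvmf target.
  scripts/rpc.py nvmf_create_transport -t tcp -o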
(the same connect() failure sequence for tqpair=0x7f3448000b90 with addr=10.0.0.2, port=4420 keeps repeating, with new timestamps, from 11:17:31.439943 through 11:17:31.462743 while the target subsystem is configured in the shell trace below)
00:29:12.012 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:12.013 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:12.013 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:12.013 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:12.013 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:12.013 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:12.013 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:12.013 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:12.014 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:12.014 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:12.014 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:12.014 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:12.014 [2024-07-26 11:17:31.462517] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:12.014 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:12.014 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:12.014 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:12.014 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:12.014 [2024-07-26 11:17:31.468148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.014 [2024-07-26 11:17:31.468367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.014 [2024-07-26 11:17:31.468396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.014 [2024-07-26 11:17:31.468408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.014 [2024-07-26 11:17:31.468417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:12.014 [2024-07-26 11:17:31.468446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.014 qpair failed and we were unable to recover it.
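For reference, the target-side setup that the rpc_cmd trace above walks through maps onto plain SPDK JSON-RPC calls. A minimal sketch, assuming an already running nvmf_tgt, the scripts/rpc.py client from the SPDK tree (rpc_cmd in the autotest scripts wraps it), and a Malloc0 bdev created beforehand; the nvmf_create_transport call is implied by the "TCP Transport Init" NOTICE rather than traced here:

    # Sketch of the target configuration seen in the trace above (not the autotest itself).
    ./scripts/rpc.py nvmf_create_transport -t tcp                      # produces the "TCP Transport Init" NOTICE
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery service listener

Once the listeners are added, the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" NOTICE above is what confirms the target is accepting connections.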
00:29:12.014 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:12.014 11:17:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1612091
(the fabric CONNECT failure block first recorded at 11:17:31.468148 above repeats at 11:17:31.478152, .488082 and .498277, still against tqpair=0x7f3448000b90 on qpair id 1, and each repetition again ends with "qpair failed and we were unable to recover it.")
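The failing CONNECT commands above come from SPDK's own NVMe-oF initiator code (nvme_fabric.c / nvme_tcp.c), not from nvme-cli. For manual debugging outside the test, a hypothetical equivalent of that host-side connect attempt could look like the following, assuming nvme-cli is installed on the host:

    # Hypothetical manual probe of the same target; not used by the autotest itself.
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1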
00:29:12.276 [2024-07-26 11:17:31.508091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.276 [2024-07-26 11:17:31.508251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.276 [2024-07-26 11:17:31.508270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.276 [2024-07-26 11:17:31.508278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.276 [2024-07-26 11:17:31.508285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:12.276 [2024-07-26 11:17:31.508302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.276 qpair failed and we were unable to recover it.
(from 11:17:31.518177 onward the identical failure block repeats roughly every 10 ms, with the last occurrence starting at 11:17:31.949327 and console timestamps running from 00:29:12.276 to 00:29:12.542; these later blocks report tqpair=0x1cfaf30 and qpair id 3 instead of tqpair=0x7f3448000b90 and qpair id 1, carry the same ctrlr.c: 761 "Unknown controller ID 0x1", nvme_fabric.c "Connect command failed, rc -5" / "sct 1, sc 130", nvme_tcp.c and nvme_qpair.c "CQ transport error -6 (No such device or address)" records, and each one ends with "qpair failed and we were unable to recover it.")
00:29:12.542 [2024-07-26 11:17:31.959354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.542 [2024-07-26 11:17:31.959507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.542 [2024-07-26 11:17:31.959525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.542 [2024-07-26 11:17:31.959532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.542 [2024-07-26 11:17:31.959539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cfaf30 00:29:12.542 [2024-07-26 11:17:31.959555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.542 qpair failed and we were unable to recover it. 00:29:12.542 [2024-07-26 11:17:31.969584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.542 [2024-07-26 11:17:31.969741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.542 [2024-07-26 11:17:31.969760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.542 [2024-07-26 11:17:31.969766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.542 [2024-07-26 11:17:31.969772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cfaf30 00:29:12.542 [2024-07-26 11:17:31.969789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.542 qpair failed and we were unable to recover it. 00:29:12.542 [2024-07-26 11:17:31.979430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.542 [2024-07-26 11:17:31.979581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.542 [2024-07-26 11:17:31.979600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.542 [2024-07-26 11:17:31.979607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.542 [2024-07-26 11:17:31.979613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cfaf30 00:29:12.542 [2024-07-26 11:17:31.979630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.542 qpair failed and we were unable to recover it. 
00:29:12.542 [2024-07-26 11:17:31.989444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.542 [2024-07-26 11:17:31.989592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.542 [2024-07-26 11:17:31.989610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.542 [2024-07-26 11:17:31.989617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.542 [2024-07-26 11:17:31.989627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cfaf30 00:29:12.542 [2024-07-26 11:17:31.989644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.542 qpair failed and we were unable to recover it. 00:29:12.542 [2024-07-26 11:17:31.999478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.542 [2024-07-26 11:17:31.999623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.542 [2024-07-26 11:17:31.999642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.542 [2024-07-26 11:17:31.999649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.542 [2024-07-26 11:17:31.999655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cfaf30 00:29:12.542 [2024-07-26 11:17:31.999672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.542 qpair failed and we were unable to recover it. 00:29:12.542 [2024-07-26 11:17:32.009506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.542 [2024-07-26 11:17:32.009655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.542 [2024-07-26 11:17:32.009673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.542 [2024-07-26 11:17:32.009680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.542 [2024-07-26 11:17:32.009686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cfaf30 00:29:12.542 [2024-07-26 11:17:32.009704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.542 qpair failed and we were unable to recover it. 
00:29:12.542 [2024-07-26 11:17:32.019544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.542 [2024-07-26 11:17:32.019695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.542 [2024-07-26 11:17:32.019713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.542 [2024-07-26 11:17:32.019720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.542 [2024-07-26 11:17:32.019726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cfaf30 00:29:12.542 [2024-07-26 11:17:32.019743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.542 qpair failed and we were unable to recover it. 00:29:12.542 [2024-07-26 11:17:32.029570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.542 [2024-07-26 11:17:32.029722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.542 [2024-07-26 11:17:32.029741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.542 [2024-07-26 11:17:32.029748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.542 [2024-07-26 11:17:32.029754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cfaf30 00:29:12.542 [2024-07-26 11:17:32.029771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.542 qpair failed and we were unable to recover it. 00:29:12.804 [2024-07-26 11:17:32.039613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.804 [2024-07-26 11:17:32.039819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.804 [2024-07-26 11:17:32.039848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.804 [2024-07-26 11:17:32.039860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.804 [2024-07-26 11:17:32.039870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.804 [2024-07-26 11:17:32.039896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.804 qpair failed and we were unable to recover it. 
00:29:12.804 [2024-07-26 11:17:32.049630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.804 [2024-07-26 11:17:32.049782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.804 [2024-07-26 11:17:32.049801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.804 [2024-07-26 11:17:32.049809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.804 [2024-07-26 11:17:32.049817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.804 [2024-07-26 11:17:32.049836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.804 qpair failed and we were unable to recover it. 00:29:12.804 [2024-07-26 11:17:32.059584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.804 [2024-07-26 11:17:32.059738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.804 [2024-07-26 11:17:32.059757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.804 [2024-07-26 11:17:32.059764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.804 [2024-07-26 11:17:32.059770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.804 [2024-07-26 11:17:32.059788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.804 qpair failed and we were unable to recover it. 00:29:12.804 [2024-07-26 11:17:32.069679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.804 [2024-07-26 11:17:32.069834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.804 [2024-07-26 11:17:32.069853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.804 [2024-07-26 11:17:32.069860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.804 [2024-07-26 11:17:32.069867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.804 [2024-07-26 11:17:32.069884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.804 qpair failed and we were unable to recover it. 
00:29:12.804 [2024-07-26 11:17:32.079712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.804 [2024-07-26 11:17:32.079865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.804 [2024-07-26 11:17:32.079884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.804 [2024-07-26 11:17:32.079896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.804 [2024-07-26 11:17:32.079903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.804 [2024-07-26 11:17:32.079920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.804 qpair failed and we were unable to recover it. 00:29:12.804 [2024-07-26 11:17:32.089748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.804 [2024-07-26 11:17:32.089898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.804 [2024-07-26 11:17:32.089916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.804 [2024-07-26 11:17:32.089923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.804 [2024-07-26 11:17:32.089929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.804 [2024-07-26 11:17:32.089947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.804 qpair failed and we were unable to recover it. 00:29:12.804 [2024-07-26 11:17:32.099782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.804 [2024-07-26 11:17:32.099938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.804 [2024-07-26 11:17:32.099956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.804 [2024-07-26 11:17:32.099963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.804 [2024-07-26 11:17:32.099970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.804 [2024-07-26 11:17:32.099989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.804 qpair failed and we were unable to recover it. 
00:29:12.804 [2024-07-26 11:17:32.109822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.804 [2024-07-26 11:17:32.109990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.804 [2024-07-26 11:17:32.110008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.804 [2024-07-26 11:17:32.110016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.804 [2024-07-26 11:17:32.110023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.804 [2024-07-26 11:17:32.110048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.804 qpair failed and we were unable to recover it. 00:29:12.804 [2024-07-26 11:17:32.119826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.804 [2024-07-26 11:17:32.119978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.804 [2024-07-26 11:17:32.119997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.804 [2024-07-26 11:17:32.120004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.804 [2024-07-26 11:17:32.120010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.804 [2024-07-26 11:17:32.120028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.804 qpair failed and we were unable to recover it. 00:29:12.804 [2024-07-26 11:17:32.129865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.804 [2024-07-26 11:17:32.130015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.804 [2024-07-26 11:17:32.130033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.805 [2024-07-26 11:17:32.130040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.805 [2024-07-26 11:17:32.130053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.805 [2024-07-26 11:17:32.130070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.805 qpair failed and we were unable to recover it. 
00:29:12.805 [2024-07-26 11:17:32.139894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.805 [2024-07-26 11:17:32.140055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.805 [2024-07-26 11:17:32.140073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.805 [2024-07-26 11:17:32.140080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.805 [2024-07-26 11:17:32.140086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.805 [2024-07-26 11:17:32.140104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.805 qpair failed and we were unable to recover it. 00:29:12.805 [2024-07-26 11:17:32.149847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.805 [2024-07-26 11:17:32.149999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.805 [2024-07-26 11:17:32.150017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.805 [2024-07-26 11:17:32.150025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.805 [2024-07-26 11:17:32.150032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.805 [2024-07-26 11:17:32.150054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.805 qpair failed and we were unable to recover it. 00:29:12.805 [2024-07-26 11:17:32.159902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.805 [2024-07-26 11:17:32.160057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.805 [2024-07-26 11:17:32.160074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.805 [2024-07-26 11:17:32.160081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.805 [2024-07-26 11:17:32.160087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.805 [2024-07-26 11:17:32.160105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.805 qpair failed and we were unable to recover it. 
00:29:12.805 [2024-07-26 11:17:32.169977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.805 [2024-07-26 11:17:32.170137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.805 [2024-07-26 11:17:32.170161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.805 [2024-07-26 11:17:32.170168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.805 [2024-07-26 11:17:32.170174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.805 [2024-07-26 11:17:32.170192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.805 qpair failed and we were unable to recover it. 00:29:12.805 [2024-07-26 11:17:32.180238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.805 [2024-07-26 11:17:32.180409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.805 [2024-07-26 11:17:32.180428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.805 [2024-07-26 11:17:32.180435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.805 [2024-07-26 11:17:32.180443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.805 [2024-07-26 11:17:32.180460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.805 qpair failed and we were unable to recover it. 00:29:12.805 [2024-07-26 11:17:32.190028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.805 [2024-07-26 11:17:32.190183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.805 [2024-07-26 11:17:32.190202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.805 [2024-07-26 11:17:32.190209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.805 [2024-07-26 11:17:32.190216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.805 [2024-07-26 11:17:32.190234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.805 qpair failed and we were unable to recover it. 
00:29:12.805 [2024-07-26 11:17:32.200079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.805 [2024-07-26 11:17:32.200223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.805 [2024-07-26 11:17:32.200242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.805 [2024-07-26 11:17:32.200250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.805 [2024-07-26 11:17:32.200257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.805 [2024-07-26 11:17:32.200276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.805 qpair failed and we were unable to recover it. 00:29:12.805 [2024-07-26 11:17:32.210080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.805 [2024-07-26 11:17:32.210222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.805 [2024-07-26 11:17:32.210241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.805 [2024-07-26 11:17:32.210249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.805 [2024-07-26 11:17:32.210256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.805 [2024-07-26 11:17:32.210277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.805 qpair failed and we were unable to recover it. 00:29:12.805 [2024-07-26 11:17:32.220127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.805 [2024-07-26 11:17:32.220283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.805 [2024-07-26 11:17:32.220302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.805 [2024-07-26 11:17:32.220310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.805 [2024-07-26 11:17:32.220317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.805 [2024-07-26 11:17:32.220335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.805 qpair failed and we were unable to recover it. 
00:29:12.805 [2024-07-26 11:17:32.230147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.805 [2024-07-26 11:17:32.230303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.805 [2024-07-26 11:17:32.230322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.805 [2024-07-26 11:17:32.230330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.805 [2024-07-26 11:17:32.230338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.805 [2024-07-26 11:17:32.230356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.805 qpair failed and we were unable to recover it. 00:29:12.805 [2024-07-26 11:17:32.240199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.805 [2024-07-26 11:17:32.240355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.805 [2024-07-26 11:17:32.240374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.805 [2024-07-26 11:17:32.240382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.805 [2024-07-26 11:17:32.240389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.805 [2024-07-26 11:17:32.240406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.805 qpair failed and we were unable to recover it. 00:29:12.805 [2024-07-26 11:17:32.250178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.805 [2024-07-26 11:17:32.250355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.805 [2024-07-26 11:17:32.250374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.805 [2024-07-26 11:17:32.250381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.805 [2024-07-26 11:17:32.250388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.805 [2024-07-26 11:17:32.250405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.805 qpair failed and we were unable to recover it. 
00:29:12.805 [2024-07-26 11:17:32.260242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.805 [2024-07-26 11:17:32.260398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.805 [2024-07-26 11:17:32.260421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.805 [2024-07-26 11:17:32.260429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.805 [2024-07-26 11:17:32.260435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.806 [2024-07-26 11:17:32.260452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.806 qpair failed and we were unable to recover it. 00:29:12.806 [2024-07-26 11:17:32.270256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.806 [2024-07-26 11:17:32.270418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.806 [2024-07-26 11:17:32.270437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.806 [2024-07-26 11:17:32.270444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.806 [2024-07-26 11:17:32.270451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.806 [2024-07-26 11:17:32.270468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.806 qpair failed and we were unable to recover it. 00:29:12.806 [2024-07-26 11:17:32.280228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.806 [2024-07-26 11:17:32.280372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.806 [2024-07-26 11:17:32.280391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.806 [2024-07-26 11:17:32.280398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.806 [2024-07-26 11:17:32.280404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.806 [2024-07-26 11:17:32.280422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.806 qpair failed and we were unable to recover it. 
00:29:12.806 [2024-07-26 11:17:32.290352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.806 [2024-07-26 11:17:32.290527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.806 [2024-07-26 11:17:32.290545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.806 [2024-07-26 11:17:32.290552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.806 [2024-07-26 11:17:32.290558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:12.806 [2024-07-26 11:17:32.290576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.806 qpair failed and we were unable to recover it. 00:29:13.067 [2024-07-26 11:17:32.300369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.067 [2024-07-26 11:17:32.300520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.067 [2024-07-26 11:17:32.300538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.067 [2024-07-26 11:17:32.300545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.067 [2024-07-26 11:17:32.300553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.067 [2024-07-26 11:17:32.300575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.067 qpair failed and we were unable to recover it. 00:29:13.067 [2024-07-26 11:17:32.310374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.067 [2024-07-26 11:17:32.310523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.067 [2024-07-26 11:17:32.310541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.067 [2024-07-26 11:17:32.310548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.067 [2024-07-26 11:17:32.310554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.067 [2024-07-26 11:17:32.310572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.067 qpair failed and we were unable to recover it. 
00:29:13.067 [2024-07-26 11:17:32.320424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.067 [2024-07-26 11:17:32.320598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.067 [2024-07-26 11:17:32.320615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.067 [2024-07-26 11:17:32.320623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.067 [2024-07-26 11:17:32.320629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.067 [2024-07-26 11:17:32.320646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.067 qpair failed and we were unable to recover it. 00:29:13.067 [2024-07-26 11:17:32.330426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.068 [2024-07-26 11:17:32.330575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.068 [2024-07-26 11:17:32.330594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.068 [2024-07-26 11:17:32.330601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.068 [2024-07-26 11:17:32.330607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.068 [2024-07-26 11:17:32.330625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.068 qpair failed and we were unable to recover it. 00:29:13.068 [2024-07-26 11:17:32.340458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.068 [2024-07-26 11:17:32.340650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.068 [2024-07-26 11:17:32.340669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.068 [2024-07-26 11:17:32.340676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.068 [2024-07-26 11:17:32.340682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.068 [2024-07-26 11:17:32.340700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.068 qpair failed and we were unable to recover it. 
00:29:13.068 [2024-07-26 11:17:32.350452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.068 [2024-07-26 11:17:32.350605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.068 [2024-07-26 11:17:32.350624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.068 [2024-07-26 11:17:32.350631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.068 [2024-07-26 11:17:32.350637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.068 [2024-07-26 11:17:32.350655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.068 qpair failed and we were unable to recover it. 00:29:13.068 [2024-07-26 11:17:32.360447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.068 [2024-07-26 11:17:32.360597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.068 [2024-07-26 11:17:32.360615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.068 [2024-07-26 11:17:32.360623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.068 [2024-07-26 11:17:32.360629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.068 [2024-07-26 11:17:32.360647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.068 qpair failed and we were unable to recover it. 00:29:13.068 [2024-07-26 11:17:32.370462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.068 [2024-07-26 11:17:32.370613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.068 [2024-07-26 11:17:32.370631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.068 [2024-07-26 11:17:32.370639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.068 [2024-07-26 11:17:32.370645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.068 [2024-07-26 11:17:32.370662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.068 qpair failed and we were unable to recover it. 
00:29:13.068 [2024-07-26 11:17:32.380497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.068 [2024-07-26 11:17:32.380650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.068 [2024-07-26 11:17:32.380668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.068 [2024-07-26 11:17:32.380676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.068 [2024-07-26 11:17:32.380682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.068 [2024-07-26 11:17:32.380699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.068 qpair failed and we were unable to recover it. 00:29:13.068 [2024-07-26 11:17:32.390522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.068 [2024-07-26 11:17:32.390675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.068 [2024-07-26 11:17:32.390693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.068 [2024-07-26 11:17:32.390701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.068 [2024-07-26 11:17:32.390711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.068 [2024-07-26 11:17:32.390729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.068 qpair failed and we were unable to recover it. 00:29:13.068 [2024-07-26 11:17:32.400629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.068 [2024-07-26 11:17:32.400788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.068 [2024-07-26 11:17:32.400806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.068 [2024-07-26 11:17:32.400814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.068 [2024-07-26 11:17:32.400820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.068 [2024-07-26 11:17:32.400839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.068 qpair failed and we were unable to recover it. 
00:29:13.068 [2024-07-26 11:17:32.410652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.068 [2024-07-26 11:17:32.410799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.068 [2024-07-26 11:17:32.410817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.068 [2024-07-26 11:17:32.410824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.068 [2024-07-26 11:17:32.410830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.068 [2024-07-26 11:17:32.410848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.068 qpair failed and we were unable to recover it. 00:29:13.068 [2024-07-26 11:17:32.420658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.068 [2024-07-26 11:17:32.420808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.068 [2024-07-26 11:17:32.420826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.068 [2024-07-26 11:17:32.420833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.068 [2024-07-26 11:17:32.420839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.068 [2024-07-26 11:17:32.420857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.068 qpair failed and we were unable to recover it. 00:29:13.068 [2024-07-26 11:17:32.430622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.068 [2024-07-26 11:17:32.430781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.068 [2024-07-26 11:17:32.430800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.068 [2024-07-26 11:17:32.430807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.068 [2024-07-26 11:17:32.430813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.068 [2024-07-26 11:17:32.430830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.068 qpair failed and we were unable to recover it. 
00:29:13.068 [2024-07-26 11:17:32.440707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.068 [2024-07-26 11:17:32.440857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.068 [2024-07-26 11:17:32.440875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.068 [2024-07-26 11:17:32.440882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.068 [2024-07-26 11:17:32.440888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.068 [2024-07-26 11:17:32.440905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.068 qpair failed and we were unable to recover it. 00:29:13.068 [2024-07-26 11:17:32.450758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.068 [2024-07-26 11:17:32.450902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.068 [2024-07-26 11:17:32.450919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.068 [2024-07-26 11:17:32.450926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.068 [2024-07-26 11:17:32.450932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.068 [2024-07-26 11:17:32.450949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.068 qpair failed and we were unable to recover it. 00:29:13.068 [2024-07-26 11:17:32.460797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.069 [2024-07-26 11:17:32.460983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.069 [2024-07-26 11:17:32.461002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.069 [2024-07-26 11:17:32.461008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.069 [2024-07-26 11:17:32.461014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.069 [2024-07-26 11:17:32.461032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.069 qpair failed and we were unable to recover it. 
00:29:13.069 [2024-07-26 11:17:32.470828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.069 [2024-07-26 11:17:32.470984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.069 [2024-07-26 11:17:32.471002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.069 [2024-07-26 11:17:32.471009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.069 [2024-07-26 11:17:32.471015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.069 [2024-07-26 11:17:32.471033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.069 qpair failed and we were unable to recover it. 00:29:13.069 [2024-07-26 11:17:32.480840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.069 [2024-07-26 11:17:32.480990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.069 [2024-07-26 11:17:32.481008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.069 [2024-07-26 11:17:32.481018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.069 [2024-07-26 11:17:32.481025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.069 [2024-07-26 11:17:32.481041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.069 qpair failed and we were unable to recover it. 00:29:13.069 [2024-07-26 11:17:32.490877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.069 [2024-07-26 11:17:32.491029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.069 [2024-07-26 11:17:32.491054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.069 [2024-07-26 11:17:32.491062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.069 [2024-07-26 11:17:32.491068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.069 [2024-07-26 11:17:32.491086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.069 qpair failed and we were unable to recover it. 
00:29:13.069 [2024-07-26 11:17:32.500905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.069 [2024-07-26 11:17:32.501061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.069 [2024-07-26 11:17:32.501080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.069 [2024-07-26 11:17:32.501087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.069 [2024-07-26 11:17:32.501093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.069 [2024-07-26 11:17:32.501110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.069 qpair failed and we were unable to recover it. 00:29:13.069 [2024-07-26 11:17:32.510927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.069 [2024-07-26 11:17:32.511082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.069 [2024-07-26 11:17:32.511099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.069 [2024-07-26 11:17:32.511106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.069 [2024-07-26 11:17:32.511112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.069 [2024-07-26 11:17:32.511130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.069 qpair failed and we were unable to recover it. 00:29:13.069 [2024-07-26 11:17:32.520965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.069 [2024-07-26 11:17:32.521121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.069 [2024-07-26 11:17:32.521139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.069 [2024-07-26 11:17:32.521146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.069 [2024-07-26 11:17:32.521152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.069 [2024-07-26 11:17:32.521170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.069 qpair failed and we were unable to recover it. 
00:29:13.069 [2024-07-26 11:17:32.531000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.069 [2024-07-26 11:17:32.531153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.069 [2024-07-26 11:17:32.531172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.069 [2024-07-26 11:17:32.531179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.069 [2024-07-26 11:17:32.531185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.069 [2024-07-26 11:17:32.531202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.069 qpair failed and we were unable to recover it. 00:29:13.069 [2024-07-26 11:17:32.541029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.069 [2024-07-26 11:17:32.541186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.069 [2024-07-26 11:17:32.541204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.069 [2024-07-26 11:17:32.541211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.069 [2024-07-26 11:17:32.541217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.069 [2024-07-26 11:17:32.541235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.069 qpair failed and we were unable to recover it. 00:29:13.069 [2024-07-26 11:17:32.551031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.069 [2024-07-26 11:17:32.551182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.069 [2024-07-26 11:17:32.551199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.069 [2024-07-26 11:17:32.551207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.069 [2024-07-26 11:17:32.551213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.069 [2024-07-26 11:17:32.551230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.069 qpair failed and we were unable to recover it. 
00:29:13.331 [2024-07-26 11:17:32.561109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.331 [2024-07-26 11:17:32.561265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.331 [2024-07-26 11:17:32.561283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.331 [2024-07-26 11:17:32.561291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.331 [2024-07-26 11:17:32.561298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.331 [2024-07-26 11:17:32.561316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-07-26 11:17:32.571131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.331 [2024-07-26 11:17:32.571285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.331 [2024-07-26 11:17:32.571303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.331 [2024-07-26 11:17:32.571314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.331 [2024-07-26 11:17:32.571321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.331 [2024-07-26 11:17:32.571338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-07-26 11:17:32.581071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.331 [2024-07-26 11:17:32.581222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.331 [2024-07-26 11:17:32.581240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.331 [2024-07-26 11:17:32.581247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.331 [2024-07-26 11:17:32.581253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.331 [2024-07-26 11:17:32.581271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.331 qpair failed and we were unable to recover it. 
00:29:13.331 [2024-07-26 11:17:32.591151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.331 [2024-07-26 11:17:32.591304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.331 [2024-07-26 11:17:32.591322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.331 [2024-07-26 11:17:32.591329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.331 [2024-07-26 11:17:32.591335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.331 [2024-07-26 11:17:32.591353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-07-26 11:17:32.601200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.331 [2024-07-26 11:17:32.601348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.331 [2024-07-26 11:17:32.601366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.331 [2024-07-26 11:17:32.601373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.331 [2024-07-26 11:17:32.601379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.331 [2024-07-26 11:17:32.601396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-07-26 11:17:32.611220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.331 [2024-07-26 11:17:32.611409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.331 [2024-07-26 11:17:32.611427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.331 [2024-07-26 11:17:32.611434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.331 [2024-07-26 11:17:32.611441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.331 [2024-07-26 11:17:32.611458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.331 qpair failed and we were unable to recover it. 
00:29:13.331 [2024-07-26 11:17:32.621237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.331 [2024-07-26 11:17:32.621387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.331 [2024-07-26 11:17:32.621405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.332 [2024-07-26 11:17:32.621412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.332 [2024-07-26 11:17:32.621418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.332 [2024-07-26 11:17:32.621435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-07-26 11:17:32.631309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.332 [2024-07-26 11:17:32.631464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.332 [2024-07-26 11:17:32.631482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.332 [2024-07-26 11:17:32.631490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.332 [2024-07-26 11:17:32.631497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.332 [2024-07-26 11:17:32.631515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-07-26 11:17:32.641287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.332 [2024-07-26 11:17:32.641438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.332 [2024-07-26 11:17:32.641455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.332 [2024-07-26 11:17:32.641463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.332 [2024-07-26 11:17:32.641469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.332 [2024-07-26 11:17:32.641488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.332 qpair failed and we were unable to recover it. 
00:29:13.332 [2024-07-26 11:17:32.651274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.332 [2024-07-26 11:17:32.651427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.332 [2024-07-26 11:17:32.651445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.332 [2024-07-26 11:17:32.651452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.332 [2024-07-26 11:17:32.651458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.332 [2024-07-26 11:17:32.651476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-07-26 11:17:32.661397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.332 [2024-07-26 11:17:32.661547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.332 [2024-07-26 11:17:32.661568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.332 [2024-07-26 11:17:32.661576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.332 [2024-07-26 11:17:32.661582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.332 [2024-07-26 11:17:32.661599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-07-26 11:17:32.671399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.332 [2024-07-26 11:17:32.671548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.332 [2024-07-26 11:17:32.671567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.332 [2024-07-26 11:17:32.671574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.332 [2024-07-26 11:17:32.671580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.332 [2024-07-26 11:17:32.671598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.332 qpair failed and we were unable to recover it. 
00:29:13.332 [2024-07-26 11:17:32.681337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.332 [2024-07-26 11:17:32.681532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.332 [2024-07-26 11:17:32.681550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.332 [2024-07-26 11:17:32.681557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.332 [2024-07-26 11:17:32.681563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.332 [2024-07-26 11:17:32.681580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-07-26 11:17:32.691439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.332 [2024-07-26 11:17:32.691590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.332 [2024-07-26 11:17:32.691608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.332 [2024-07-26 11:17:32.691615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.332 [2024-07-26 11:17:32.691621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.332 [2024-07-26 11:17:32.691638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-07-26 11:17:32.701487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.332 [2024-07-26 11:17:32.701635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.332 [2024-07-26 11:17:32.701653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.332 [2024-07-26 11:17:32.701660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.332 [2024-07-26 11:17:32.701666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.332 [2024-07-26 11:17:32.701687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.332 qpair failed and we were unable to recover it. 
00:29:13.332 [2024-07-26 11:17:32.711504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.332 [2024-07-26 11:17:32.711672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.332 [2024-07-26 11:17:32.711691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.332 [2024-07-26 11:17:32.711698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.332 [2024-07-26 11:17:32.711704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.332 [2024-07-26 11:17:32.711721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-07-26 11:17:32.721512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.332 [2024-07-26 11:17:32.721661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.332 [2024-07-26 11:17:32.721680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.332 [2024-07-26 11:17:32.721687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.332 [2024-07-26 11:17:32.721694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.332 [2024-07-26 11:17:32.721711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-07-26 11:17:32.731559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.332 [2024-07-26 11:17:32.731716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.332 [2024-07-26 11:17:32.731733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.332 [2024-07-26 11:17:32.731740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.332 [2024-07-26 11:17:32.731747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.332 [2024-07-26 11:17:32.731764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.332 qpair failed and we were unable to recover it. 
00:29:13.332 [2024-07-26 11:17:32.741603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.332 [2024-07-26 11:17:32.741754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.332 [2024-07-26 11:17:32.741772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.332 [2024-07-26 11:17:32.741779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.332 [2024-07-26 11:17:32.741785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.332 [2024-07-26 11:17:32.741802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-07-26 11:17:32.751545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.332 [2024-07-26 11:17:32.751746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.332 [2024-07-26 11:17:32.751768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.332 [2024-07-26 11:17:32.751776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.332 [2024-07-26 11:17:32.751782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.333 [2024-07-26 11:17:32.751800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-07-26 11:17:32.761669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.333 [2024-07-26 11:17:32.761816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.333 [2024-07-26 11:17:32.761834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.333 [2024-07-26 11:17:32.761841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.333 [2024-07-26 11:17:32.761847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.333 [2024-07-26 11:17:32.761865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.333 qpair failed and we were unable to recover it. 
00:29:13.333 [2024-07-26 11:17:32.771612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.333 [2024-07-26 11:17:32.771761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.333 [2024-07-26 11:17:32.771779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.333 [2024-07-26 11:17:32.771787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.333 [2024-07-26 11:17:32.771794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.333 [2024-07-26 11:17:32.771811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-07-26 11:17:32.781732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.333 [2024-07-26 11:17:32.781884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.333 [2024-07-26 11:17:32.781902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.333 [2024-07-26 11:17:32.781909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.333 [2024-07-26 11:17:32.781914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.333 [2024-07-26 11:17:32.781932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-07-26 11:17:32.791741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.333 [2024-07-26 11:17:32.791896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.333 [2024-07-26 11:17:32.791914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.333 [2024-07-26 11:17:32.791921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.333 [2024-07-26 11:17:32.791931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.333 [2024-07-26 11:17:32.791948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.333 qpair failed and we were unable to recover it. 
00:29:13.333 [2024-07-26 11:17:32.801785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.333 [2024-07-26 11:17:32.801934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.333 [2024-07-26 11:17:32.801952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.333 [2024-07-26 11:17:32.801959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.333 [2024-07-26 11:17:32.801967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.333 [2024-07-26 11:17:32.801984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-07-26 11:17:32.811732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.333 [2024-07-26 11:17:32.811888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.333 [2024-07-26 11:17:32.811906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.333 [2024-07-26 11:17:32.811913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.333 [2024-07-26 11:17:32.811919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.333 [2024-07-26 11:17:32.811937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-07-26 11:17:32.822074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.333 [2024-07-26 11:17:32.822228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.333 [2024-07-26 11:17:32.822246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.333 [2024-07-26 11:17:32.822253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.333 [2024-07-26 11:17:32.822259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.333 [2024-07-26 11:17:32.822277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.333 qpair failed and we were unable to recover it. 
00:29:13.594 [2024-07-26 11:17:32.831874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.594 [2024-07-26 11:17:32.832024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.594 [2024-07-26 11:17:32.832047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.594 [2024-07-26 11:17:32.832055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.594 [2024-07-26 11:17:32.832063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.594 [2024-07-26 11:17:32.832080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.594 qpair failed and we were unable to recover it. 00:29:13.594 [2024-07-26 11:17:32.841927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.594 [2024-07-26 11:17:32.842081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.594 [2024-07-26 11:17:32.842099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.595 [2024-07-26 11:17:32.842106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.595 [2024-07-26 11:17:32.842112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.595 [2024-07-26 11:17:32.842129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-26 11:17:32.851929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.595 [2024-07-26 11:17:32.852090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.595 [2024-07-26 11:17:32.852107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.595 [2024-07-26 11:17:32.852114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.595 [2024-07-26 11:17:32.852120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.595 [2024-07-26 11:17:32.852137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.595 qpair failed and we were unable to recover it. 
00:29:13.595 [2024-07-26 11:17:32.861951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.595 [2024-07-26 11:17:32.862110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.595 [2024-07-26 11:17:32.862127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.595 [2024-07-26 11:17:32.862134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.595 [2024-07-26 11:17:32.862140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.595 [2024-07-26 11:17:32.862157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-26 11:17:32.872015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.595 [2024-07-26 11:17:32.872175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.595 [2024-07-26 11:17:32.872193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.595 [2024-07-26 11:17:32.872200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.595 [2024-07-26 11:17:32.872207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.595 [2024-07-26 11:17:32.872224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-26 11:17:32.881949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.595 [2024-07-26 11:17:32.882108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.595 [2024-07-26 11:17:32.882126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.595 [2024-07-26 11:17:32.882136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.595 [2024-07-26 11:17:32.882143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.595 [2024-07-26 11:17:32.882160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.595 qpair failed and we were unable to recover it. 
00:29:13.595 [2024-07-26 11:17:32.892040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.595 [2024-07-26 11:17:32.892194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.595 [2024-07-26 11:17:32.892211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.595 [2024-07-26 11:17:32.892218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.595 [2024-07-26 11:17:32.892224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.595 [2024-07-26 11:17:32.892242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-26 11:17:32.902056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.595 [2024-07-26 11:17:32.902207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.595 [2024-07-26 11:17:32.902225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.595 [2024-07-26 11:17:32.902232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.595 [2024-07-26 11:17:32.902238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.595 [2024-07-26 11:17:32.902256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-26 11:17:32.912098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.595 [2024-07-26 11:17:32.912249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.595 [2024-07-26 11:17:32.912266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.595 [2024-07-26 11:17:32.912273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.595 [2024-07-26 11:17:32.912280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.595 [2024-07-26 11:17:32.912298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.595 qpair failed and we were unable to recover it. 
00:29:13.595 [2024-07-26 11:17:32.922167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.595 [2024-07-26 11:17:32.922333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.595 [2024-07-26 11:17:32.922351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.595 [2024-07-26 11:17:32.922358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.595 [2024-07-26 11:17:32.922364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.595 [2024-07-26 11:17:32.922381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-26 11:17:32.932164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.595 [2024-07-26 11:17:32.932318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.595 [2024-07-26 11:17:32.932336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.595 [2024-07-26 11:17:32.932343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.595 [2024-07-26 11:17:32.932348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.595 [2024-07-26 11:17:32.932366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-26 11:17:32.942239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.595 [2024-07-26 11:17:32.942388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.595 [2024-07-26 11:17:32.942406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.595 [2024-07-26 11:17:32.942412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.595 [2024-07-26 11:17:32.942418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.595 [2024-07-26 11:17:32.942435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.595 qpair failed and we were unable to recover it. 
00:29:13.595 [2024-07-26 11:17:32.952207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.595 [2024-07-26 11:17:32.952356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.595 [2024-07-26 11:17:32.952373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.595 [2024-07-26 11:17:32.952380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.595 [2024-07-26 11:17:32.952386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.595 [2024-07-26 11:17:32.952404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-26 11:17:32.962249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.595 [2024-07-26 11:17:32.962403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.595 [2024-07-26 11:17:32.962421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.595 [2024-07-26 11:17:32.962428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.595 [2024-07-26 11:17:32.962434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.595 [2024-07-26 11:17:32.962451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.595 qpair failed and we were unable to recover it. 00:29:13.595 [2024-07-26 11:17:32.972504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.595 [2024-07-26 11:17:32.972653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.595 [2024-07-26 11:17:32.972671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.595 [2024-07-26 11:17:32.972685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.596 [2024-07-26 11:17:32.972691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.596 [2024-07-26 11:17:32.972709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.596 qpair failed and we were unable to recover it. 
00:29:13.596 [2024-07-26 11:17:32.982321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.596 [2024-07-26 11:17:32.982471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.596 [2024-07-26 11:17:32.982489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.596 [2024-07-26 11:17:32.982496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.596 [2024-07-26 11:17:32.982503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.596 [2024-07-26 11:17:32.982521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-26 11:17:32.992312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.596 [2024-07-26 11:17:32.992463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.596 [2024-07-26 11:17:32.992482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.596 [2024-07-26 11:17:32.992489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.596 [2024-07-26 11:17:32.992494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.596 [2024-07-26 11:17:32.992512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-26 11:17:33.002369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.596 [2024-07-26 11:17:33.002513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.596 [2024-07-26 11:17:33.002531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.596 [2024-07-26 11:17:33.002538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.596 [2024-07-26 11:17:33.002546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.596 [2024-07-26 11:17:33.002563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.596 qpair failed and we were unable to recover it. 
00:29:13.596 [2024-07-26 11:17:33.012395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.596 [2024-07-26 11:17:33.012540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.596 [2024-07-26 11:17:33.012557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.596 [2024-07-26 11:17:33.012564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.596 [2024-07-26 11:17:33.012570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.596 [2024-07-26 11:17:33.012588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-26 11:17:33.022655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.596 [2024-07-26 11:17:33.022802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.596 [2024-07-26 11:17:33.022820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.596 [2024-07-26 11:17:33.022827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.596 [2024-07-26 11:17:33.022833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.596 [2024-07-26 11:17:33.022850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-26 11:17:33.032451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.596 [2024-07-26 11:17:33.032599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.596 [2024-07-26 11:17:33.032617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.596 [2024-07-26 11:17:33.032624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.596 [2024-07-26 11:17:33.032631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.596 [2024-07-26 11:17:33.032648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.596 qpair failed and we were unable to recover it. 
00:29:13.596 [2024-07-26 11:17:33.042470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.596 [2024-07-26 11:17:33.042619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.596 [2024-07-26 11:17:33.042637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.596 [2024-07-26 11:17:33.042645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.596 [2024-07-26 11:17:33.042652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.596 [2024-07-26 11:17:33.042669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-26 11:17:33.052503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.596 [2024-07-26 11:17:33.052670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.596 [2024-07-26 11:17:33.052689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.596 [2024-07-26 11:17:33.052695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.596 [2024-07-26 11:17:33.052702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.596 [2024-07-26 11:17:33.052719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-26 11:17:33.062455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.596 [2024-07-26 11:17:33.062614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.596 [2024-07-26 11:17:33.062636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.596 [2024-07-26 11:17:33.062643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.596 [2024-07-26 11:17:33.062651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.596 [2024-07-26 11:17:33.062668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.596 qpair failed and we were unable to recover it. 
00:29:13.596 [2024-07-26 11:17:33.072591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.596 [2024-07-26 11:17:33.072785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.596 [2024-07-26 11:17:33.072803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.596 [2024-07-26 11:17:33.072810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.596 [2024-07-26 11:17:33.072816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.596 [2024-07-26 11:17:33.072834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.596 [2024-07-26 11:17:33.082507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.596 [2024-07-26 11:17:33.082662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.596 [2024-07-26 11:17:33.082680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.596 [2024-07-26 11:17:33.082687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.596 [2024-07-26 11:17:33.082693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.596 [2024-07-26 11:17:33.082711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.596 qpair failed and we were unable to recover it. 00:29:13.858 [2024-07-26 11:17:33.092536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.858 [2024-07-26 11:17:33.092686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.858 [2024-07-26 11:17:33.092704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.858 [2024-07-26 11:17:33.092712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.858 [2024-07-26 11:17:33.092719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.858 [2024-07-26 11:17:33.092735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.858 qpair failed and we were unable to recover it. 
00:29:13.858 [2024-07-26 11:17:33.102687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.858 [2024-07-26 11:17:33.102836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.858 [2024-07-26 11:17:33.102853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.858 [2024-07-26 11:17:33.102861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.858 [2024-07-26 11:17:33.102867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.858 [2024-07-26 11:17:33.102887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.858 qpair failed and we were unable to recover it. 00:29:13.858 [2024-07-26 11:17:33.112686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.858 [2024-07-26 11:17:33.112832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.858 [2024-07-26 11:17:33.112850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.858 [2024-07-26 11:17:33.112857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.858 [2024-07-26 11:17:33.112863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.858 [2024-07-26 11:17:33.112880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.858 qpair failed and we were unable to recover it. 00:29:13.858 [2024-07-26 11:17:33.122708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.858 [2024-07-26 11:17:33.122853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.858 [2024-07-26 11:17:33.122871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.858 [2024-07-26 11:17:33.122879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.858 [2024-07-26 11:17:33.122885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.858 [2024-07-26 11:17:33.122903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.858 qpair failed and we were unable to recover it. 
00:29:13.858 [2024-07-26 11:17:33.132743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.858 [2024-07-26 11:17:33.132894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.858 [2024-07-26 11:17:33.132912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.858 [2024-07-26 11:17:33.132920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.858 [2024-07-26 11:17:33.132926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.858 [2024-07-26 11:17:33.132943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.858 qpair failed and we were unable to recover it. 00:29:13.858 [2024-07-26 11:17:33.142778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.858 [2024-07-26 11:17:33.142924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.858 [2024-07-26 11:17:33.142942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.858 [2024-07-26 11:17:33.142949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.858 [2024-07-26 11:17:33.142955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.858 [2024-07-26 11:17:33.142973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.858 qpair failed and we were unable to recover it. 00:29:13.858 [2024-07-26 11:17:33.152792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.858 [2024-07-26 11:17:33.152942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.858 [2024-07-26 11:17:33.152964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.858 [2024-07-26 11:17:33.152971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.858 [2024-07-26 11:17:33.152977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.858 [2024-07-26 11:17:33.152995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.858 qpair failed and we were unable to recover it. 
00:29:13.858 [2024-07-26 11:17:33.162830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.858 [2024-07-26 11:17:33.162978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.858 [2024-07-26 11:17:33.162996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.858 [2024-07-26 11:17:33.163004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.858 [2024-07-26 11:17:33.163010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.858 [2024-07-26 11:17:33.163027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.858 qpair failed and we were unable to recover it. 00:29:13.858 [2024-07-26 11:17:33.172852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.858 [2024-07-26 11:17:33.172999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.858 [2024-07-26 11:17:33.173018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.858 [2024-07-26 11:17:33.173025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.858 [2024-07-26 11:17:33.173031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.858 [2024-07-26 11:17:33.173055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.859 qpair failed and we were unable to recover it. 00:29:13.859 [2024-07-26 11:17:33.182898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.859 [2024-07-26 11:17:33.183053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.859 [2024-07-26 11:17:33.183072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.859 [2024-07-26 11:17:33.183079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.859 [2024-07-26 11:17:33.183085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.859 [2024-07-26 11:17:33.183103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.859 qpair failed and we were unable to recover it. 
00:29:13.859 [2024-07-26 11:17:33.192907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.859 [2024-07-26 11:17:33.193066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.859 [2024-07-26 11:17:33.193084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.859 [2024-07-26 11:17:33.193091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.859 [2024-07-26 11:17:33.193101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.859 [2024-07-26 11:17:33.193118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.859 qpair failed and we were unable to recover it. 00:29:13.859 [2024-07-26 11:17:33.202946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.859 [2024-07-26 11:17:33.203104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.859 [2024-07-26 11:17:33.203123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.859 [2024-07-26 11:17:33.203130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.859 [2024-07-26 11:17:33.203137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.859 [2024-07-26 11:17:33.203155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.859 qpair failed and we were unable to recover it. 00:29:13.859 [2024-07-26 11:17:33.212951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.859 [2024-07-26 11:17:33.213105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.859 [2024-07-26 11:17:33.213123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.859 [2024-07-26 11:17:33.213130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.859 [2024-07-26 11:17:33.213136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.859 [2024-07-26 11:17:33.213154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.859 qpair failed and we were unable to recover it. 
00:29:13.859 [2024-07-26 11:17:33.223040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.859 [2024-07-26 11:17:33.223197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.859 [2024-07-26 11:17:33.223215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.859 [2024-07-26 11:17:33.223222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.859 [2024-07-26 11:17:33.223228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.859 [2024-07-26 11:17:33.223246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.859 qpair failed and we were unable to recover it. 00:29:13.859 [2024-07-26 11:17:33.233035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.859 [2024-07-26 11:17:33.233188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.859 [2024-07-26 11:17:33.233207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.859 [2024-07-26 11:17:33.233214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.859 [2024-07-26 11:17:33.233220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.859 [2024-07-26 11:17:33.233239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.859 qpair failed and we were unable to recover it. 00:29:13.859 [2024-07-26 11:17:33.243073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.859 [2024-07-26 11:17:33.243232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.859 [2024-07-26 11:17:33.243251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.859 [2024-07-26 11:17:33.243258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.859 [2024-07-26 11:17:33.243264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.859 [2024-07-26 11:17:33.243281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.859 qpair failed and we were unable to recover it. 
00:29:13.859 [2024-07-26 11:17:33.253106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.859 [2024-07-26 11:17:33.253260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.859 [2024-07-26 11:17:33.253278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.859 [2024-07-26 11:17:33.253286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.859 [2024-07-26 11:17:33.253292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.859 [2024-07-26 11:17:33.253311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.859 qpair failed and we were unable to recover it. 00:29:13.859 [2024-07-26 11:17:33.263132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.859 [2024-07-26 11:17:33.263282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.859 [2024-07-26 11:17:33.263301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.859 [2024-07-26 11:17:33.263308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.859 [2024-07-26 11:17:33.263316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.859 [2024-07-26 11:17:33.263333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.859 qpair failed and we were unable to recover it. 00:29:13.859 [2024-07-26 11:17:33.273144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.859 [2024-07-26 11:17:33.273304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.859 [2024-07-26 11:17:33.273322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.859 [2024-07-26 11:17:33.273330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.859 [2024-07-26 11:17:33.273337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.859 [2024-07-26 11:17:33.273356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.859 qpair failed and we were unable to recover it. 
00:29:13.859 [2024-07-26 11:17:33.283162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.859 [2024-07-26 11:17:33.283319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.859 [2024-07-26 11:17:33.283337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.859 [2024-07-26 11:17:33.283344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.859 [2024-07-26 11:17:33.283353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.859 [2024-07-26 11:17:33.283371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.859 qpair failed and we were unable to recover it. 00:29:13.859 [2024-07-26 11:17:33.293181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.859 [2024-07-26 11:17:33.293340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.859 [2024-07-26 11:17:33.293359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.859 [2024-07-26 11:17:33.293366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.859 [2024-07-26 11:17:33.293373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.859 [2024-07-26 11:17:33.293391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.859 qpair failed and we were unable to recover it. 00:29:13.859 [2024-07-26 11:17:33.303227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.859 [2024-07-26 11:17:33.303376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.859 [2024-07-26 11:17:33.303395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.859 [2024-07-26 11:17:33.303402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.859 [2024-07-26 11:17:33.303409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.859 [2024-07-26 11:17:33.303427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.859 qpair failed and we were unable to recover it. 
00:29:13.859 [2024-07-26 11:17:33.313238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.860 [2024-07-26 11:17:33.313387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.860 [2024-07-26 11:17:33.313405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.860 [2024-07-26 11:17:33.313413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.860 [2024-07-26 11:17:33.313420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.860 [2024-07-26 11:17:33.313439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.860 qpair failed and we were unable to recover it. 00:29:13.860 [2024-07-26 11:17:33.323274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.860 [2024-07-26 11:17:33.323465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.860 [2024-07-26 11:17:33.323483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.860 [2024-07-26 11:17:33.323490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.860 [2024-07-26 11:17:33.323497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.860 [2024-07-26 11:17:33.323514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.860 qpair failed and we were unable to recover it. 00:29:13.860 [2024-07-26 11:17:33.333240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.860 [2024-07-26 11:17:33.333392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.860 [2024-07-26 11:17:33.333411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.860 [2024-07-26 11:17:33.333418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.860 [2024-07-26 11:17:33.333425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.860 [2024-07-26 11:17:33.333444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.860 qpair failed and we were unable to recover it. 
00:29:13.860 [2024-07-26 11:17:33.343340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.860 [2024-07-26 11:17:33.343494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.860 [2024-07-26 11:17:33.343512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.860 [2024-07-26 11:17:33.343520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.860 [2024-07-26 11:17:33.343527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:13.860 [2024-07-26 11:17:33.343545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:13.860 qpair failed and we were unable to recover it. 00:29:14.120 [2024-07-26 11:17:33.353357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.120 [2024-07-26 11:17:33.353546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.120 [2024-07-26 11:17:33.353564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.120 [2024-07-26 11:17:33.353572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.120 [2024-07-26 11:17:33.353579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.120 [2024-07-26 11:17:33.353597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.120 qpair failed and we were unable to recover it. 00:29:14.120 [2024-07-26 11:17:33.363328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.120 [2024-07-26 11:17:33.363478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.120 [2024-07-26 11:17:33.363497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.120 [2024-07-26 11:17:33.363504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.120 [2024-07-26 11:17:33.363512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.120 [2024-07-26 11:17:33.363529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.120 qpair failed and we were unable to recover it. 
00:29:14.120 [2024-07-26 11:17:33.373355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.121 [2024-07-26 11:17:33.373507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.121 [2024-07-26 11:17:33.373525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.121 [2024-07-26 11:17:33.373535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.121 [2024-07-26 11:17:33.373542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.121 [2024-07-26 11:17:33.373560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-07-26 11:17:33.383467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.121 [2024-07-26 11:17:33.383619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.121 [2024-07-26 11:17:33.383638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.121 [2024-07-26 11:17:33.383645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.121 [2024-07-26 11:17:33.383652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.121 [2024-07-26 11:17:33.383668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-07-26 11:17:33.393519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.121 [2024-07-26 11:17:33.393671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.121 [2024-07-26 11:17:33.393689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.121 [2024-07-26 11:17:33.393697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.121 [2024-07-26 11:17:33.393704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.121 [2024-07-26 11:17:33.393722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.121 qpair failed and we were unable to recover it. 
00:29:14.121 [2024-07-26 11:17:33.403520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.121 [2024-07-26 11:17:33.403664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.121 [2024-07-26 11:17:33.403682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.121 [2024-07-26 11:17:33.403689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.121 [2024-07-26 11:17:33.403696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.121 [2024-07-26 11:17:33.403714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-07-26 11:17:33.413516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.121 [2024-07-26 11:17:33.413901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.121 [2024-07-26 11:17:33.413918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.121 [2024-07-26 11:17:33.413925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.121 [2024-07-26 11:17:33.413931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.121 [2024-07-26 11:17:33.413947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-07-26 11:17:33.423576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.121 [2024-07-26 11:17:33.423724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.121 [2024-07-26 11:17:33.423742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.121 [2024-07-26 11:17:33.423749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.121 [2024-07-26 11:17:33.423756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.121 [2024-07-26 11:17:33.423773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.121 qpair failed and we were unable to recover it. 
00:29:14.121 [2024-07-26 11:17:33.433594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.121 [2024-07-26 11:17:33.433761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.121 [2024-07-26 11:17:33.433779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.121 [2024-07-26 11:17:33.433786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.121 [2024-07-26 11:17:33.433793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.121 [2024-07-26 11:17:33.433810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-07-26 11:17:33.443614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.121 [2024-07-26 11:17:33.443761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.121 [2024-07-26 11:17:33.443779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.121 [2024-07-26 11:17:33.443786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.121 [2024-07-26 11:17:33.443793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.121 [2024-07-26 11:17:33.443811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-07-26 11:17:33.453650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.121 [2024-07-26 11:17:33.453798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.121 [2024-07-26 11:17:33.453817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.121 [2024-07-26 11:17:33.453824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.121 [2024-07-26 11:17:33.453830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.121 [2024-07-26 11:17:33.453848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.121 qpair failed and we were unable to recover it. 
00:29:14.121 [2024-07-26 11:17:33.463690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.121 [2024-07-26 11:17:33.463839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.121 [2024-07-26 11:17:33.463860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.121 [2024-07-26 11:17:33.463867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.121 [2024-07-26 11:17:33.463873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.121 [2024-07-26 11:17:33.463890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-07-26 11:17:33.473928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.121 [2024-07-26 11:17:33.474102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.121 [2024-07-26 11:17:33.474120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.121 [2024-07-26 11:17:33.474127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.121 [2024-07-26 11:17:33.474135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.121 [2024-07-26 11:17:33.474152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-07-26 11:17:33.483759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.121 [2024-07-26 11:17:33.483923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.121 [2024-07-26 11:17:33.483941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.121 [2024-07-26 11:17:33.483948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.121 [2024-07-26 11:17:33.483955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.121 [2024-07-26 11:17:33.483972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.121 qpair failed and we were unable to recover it. 
00:29:14.121 [2024-07-26 11:17:33.493756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.121 [2024-07-26 11:17:33.493905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.121 [2024-07-26 11:17:33.493922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.121 [2024-07-26 11:17:33.493930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.121 [2024-07-26 11:17:33.493936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.121 [2024-07-26 11:17:33.493953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.121 qpair failed and we were unable to recover it. 00:29:14.121 [2024-07-26 11:17:33.503809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.121 [2024-07-26 11:17:33.503957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.122 [2024-07-26 11:17:33.503975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.122 [2024-07-26 11:17:33.503982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.122 [2024-07-26 11:17:33.503988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.122 [2024-07-26 11:17:33.504008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 [2024-07-26 11:17:33.513791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.122 [2024-07-26 11:17:33.513941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.122 [2024-07-26 11:17:33.513959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.122 [2024-07-26 11:17:33.513966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.122 [2024-07-26 11:17:33.513972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.122 [2024-07-26 11:17:33.513990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.122 qpair failed and we were unable to recover it. 
00:29:14.122 [2024-07-26 11:17:33.523843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.122 [2024-07-26 11:17:33.523997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.122 [2024-07-26 11:17:33.524015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.122 [2024-07-26 11:17:33.524022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.122 [2024-07-26 11:17:33.524028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.122 [2024-07-26 11:17:33.524051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 [2024-07-26 11:17:33.533873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.122 [2024-07-26 11:17:33.534021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.122 [2024-07-26 11:17:33.534039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.122 [2024-07-26 11:17:33.534051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.122 [2024-07-26 11:17:33.534058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.122 [2024-07-26 11:17:33.534076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 [2024-07-26 11:17:33.543911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.122 [2024-07-26 11:17:33.544071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.122 [2024-07-26 11:17:33.544089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.122 [2024-07-26 11:17:33.544096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.122 [2024-07-26 11:17:33.544102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.122 [2024-07-26 11:17:33.544120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.122 qpair failed and we were unable to recover it. 
00:29:14.122 [2024-07-26 11:17:33.553871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.122 [2024-07-26 11:17:33.554023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.122 [2024-07-26 11:17:33.554048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.122 [2024-07-26 11:17:33.554056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.122 [2024-07-26 11:17:33.554061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.122 [2024-07-26 11:17:33.554079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 [2024-07-26 11:17:33.563968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.122 [2024-07-26 11:17:33.564121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.122 [2024-07-26 11:17:33.564139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.122 [2024-07-26 11:17:33.564146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.122 [2024-07-26 11:17:33.564153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.122 [2024-07-26 11:17:33.564171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 [2024-07-26 11:17:33.573995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.122 [2024-07-26 11:17:33.574149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.122 [2024-07-26 11:17:33.574168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.122 [2024-07-26 11:17:33.574175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.122 [2024-07-26 11:17:33.574181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.122 [2024-07-26 11:17:33.574198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.122 qpair failed and we were unable to recover it. 
00:29:14.122 [2024-07-26 11:17:33.583952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.122 [2024-07-26 11:17:33.584107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.122 [2024-07-26 11:17:33.584125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.122 [2024-07-26 11:17:33.584132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.122 [2024-07-26 11:17:33.584139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.122 [2024-07-26 11:17:33.584157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 [2024-07-26 11:17:33.594058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.122 [2024-07-26 11:17:33.594209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.122 [2024-07-26 11:17:33.594227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.122 [2024-07-26 11:17:33.594234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.122 [2024-07-26 11:17:33.594244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.122 [2024-07-26 11:17:33.594261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.122 [2024-07-26 11:17:33.604078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.122 [2024-07-26 11:17:33.604226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.122 [2024-07-26 11:17:33.604244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.122 [2024-07-26 11:17:33.604251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.122 [2024-07-26 11:17:33.604257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.122 [2024-07-26 11:17:33.604275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.122 qpair failed and we were unable to recover it. 
00:29:14.122 [2024-07-26 11:17:33.614072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.122 [2024-07-26 11:17:33.614224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.122 [2024-07-26 11:17:33.614242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.122 [2024-07-26 11:17:33.614249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.122 [2024-07-26 11:17:33.614255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.122 [2024-07-26 11:17:33.614273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.122 qpair failed and we were unable to recover it. 00:29:14.383 [2024-07-26 11:17:33.624163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.383 [2024-07-26 11:17:33.624313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.383 [2024-07-26 11:17:33.624331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.383 [2024-07-26 11:17:33.624338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.383 [2024-07-26 11:17:33.624344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.383 [2024-07-26 11:17:33.624362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.383 qpair failed and we were unable to recover it. 00:29:14.383 [2024-07-26 11:17:33.634170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.383 [2024-07-26 11:17:33.634319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.383 [2024-07-26 11:17:33.634336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.383 [2024-07-26 11:17:33.634344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.383 [2024-07-26 11:17:33.634350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.383 [2024-07-26 11:17:33.634368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.383 qpair failed and we were unable to recover it. 
00:29:14.383 [2024-07-26 11:17:33.644209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.383 [2024-07-26 11:17:33.644363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.383 [2024-07-26 11:17:33.644382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.383 [2024-07-26 11:17:33.644389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.383 [2024-07-26 11:17:33.644396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.383 [2024-07-26 11:17:33.644413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.383 qpair failed and we were unable to recover it. 00:29:14.383 [2024-07-26 11:17:33.654254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.383 [2024-07-26 11:17:33.654425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.383 [2024-07-26 11:17:33.654443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.383 [2024-07-26 11:17:33.654450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.383 [2024-07-26 11:17:33.654456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.383 [2024-07-26 11:17:33.654473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.383 qpair failed and we were unable to recover it. 00:29:14.383 [2024-07-26 11:17:33.664260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.383 [2024-07-26 11:17:33.664407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.383 [2024-07-26 11:17:33.664425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.383 [2024-07-26 11:17:33.664433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.383 [2024-07-26 11:17:33.664439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.383 [2024-07-26 11:17:33.664456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.383 qpair failed and we were unable to recover it. 
00:29:14.383 [2024-07-26 11:17:33.674288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.383 [2024-07-26 11:17:33.674440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.383 [2024-07-26 11:17:33.674458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.383 [2024-07-26 11:17:33.674465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.383 [2024-07-26 11:17:33.674471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.383 [2024-07-26 11:17:33.674488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.383 qpair failed and we were unable to recover it. 00:29:14.384 [2024-07-26 11:17:33.684304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.384 [2024-07-26 11:17:33.684450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.384 [2024-07-26 11:17:33.684468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.384 [2024-07-26 11:17:33.684474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.384 [2024-07-26 11:17:33.684484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.384 [2024-07-26 11:17:33.684500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.384 qpair failed and we were unable to recover it. 00:29:14.384 [2024-07-26 11:17:33.694337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.384 [2024-07-26 11:17:33.694487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.384 [2024-07-26 11:17:33.694504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.384 [2024-07-26 11:17:33.694512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.384 [2024-07-26 11:17:33.694518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.384 [2024-07-26 11:17:33.694535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.384 qpair failed and we were unable to recover it. 
00:29:14.384 [2024-07-26 11:17:33.704301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.384 [2024-07-26 11:17:33.704455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.384 [2024-07-26 11:17:33.704473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.384 [2024-07-26 11:17:33.704480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.384 [2024-07-26 11:17:33.704486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.384 [2024-07-26 11:17:33.704503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.384 qpair failed and we were unable to recover it. 00:29:14.384 [2024-07-26 11:17:33.714523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.384 [2024-07-26 11:17:33.714726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.384 [2024-07-26 11:17:33.714743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.384 [2024-07-26 11:17:33.714750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.384 [2024-07-26 11:17:33.714756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.384 [2024-07-26 11:17:33.714774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.384 qpair failed and we were unable to recover it. 00:29:14.384 [2024-07-26 11:17:33.724457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.384 [2024-07-26 11:17:33.724606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.384 [2024-07-26 11:17:33.724624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.384 [2024-07-26 11:17:33.724631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.384 [2024-07-26 11:17:33.724637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.384 [2024-07-26 11:17:33.724654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.384 qpair failed and we were unable to recover it. 
00:29:14.384 [2024-07-26 11:17:33.734451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.384 [2024-07-26 11:17:33.734602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.384 [2024-07-26 11:17:33.734620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.384 [2024-07-26 11:17:33.734627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.384 [2024-07-26 11:17:33.734633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.384 [2024-07-26 11:17:33.734650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.384 qpair failed and we were unable to recover it. 00:29:14.384 [2024-07-26 11:17:33.744515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.384 [2024-07-26 11:17:33.744672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.384 [2024-07-26 11:17:33.744690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.384 [2024-07-26 11:17:33.744697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.384 [2024-07-26 11:17:33.744704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.384 [2024-07-26 11:17:33.744720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.384 qpair failed and we were unable to recover it. 00:29:14.384 [2024-07-26 11:17:33.754464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.384 [2024-07-26 11:17:33.754618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.384 [2024-07-26 11:17:33.754637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.384 [2024-07-26 11:17:33.754644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.384 [2024-07-26 11:17:33.754650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.384 [2024-07-26 11:17:33.754668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.384 qpair failed and we were unable to recover it. 
00:29:14.384 [2024-07-26 11:17:33.764514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.384 [2024-07-26 11:17:33.764663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.384 [2024-07-26 11:17:33.764681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.384 [2024-07-26 11:17:33.764688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.384 [2024-07-26 11:17:33.764694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.384 [2024-07-26 11:17:33.764712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.384 qpair failed and we were unable to recover it. 00:29:14.384 [2024-07-26 11:17:33.774487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.384 [2024-07-26 11:17:33.774637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.384 [2024-07-26 11:17:33.774655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.384 [2024-07-26 11:17:33.774665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.384 [2024-07-26 11:17:33.774672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.384 [2024-07-26 11:17:33.774689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.384 qpair failed and we were unable to recover it. 00:29:14.384 [2024-07-26 11:17:33.784515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.384 [2024-07-26 11:17:33.784666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.384 [2024-07-26 11:17:33.784684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.384 [2024-07-26 11:17:33.784692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.384 [2024-07-26 11:17:33.784698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.384 [2024-07-26 11:17:33.784715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.384 qpair failed and we were unable to recover it. 
00:29:14.384 [2024-07-26 11:17:33.794615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.384 [2024-07-26 11:17:33.794767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.384 [2024-07-26 11:17:33.794785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.384 [2024-07-26 11:17:33.794792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.384 [2024-07-26 11:17:33.794798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.384 [2024-07-26 11:17:33.794816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.384 qpair failed and we were unable to recover it. 00:29:14.384 [2024-07-26 11:17:33.804652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.384 [2024-07-26 11:17:33.804806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.384 [2024-07-26 11:17:33.804824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.384 [2024-07-26 11:17:33.804831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.384 [2024-07-26 11:17:33.804837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.384 [2024-07-26 11:17:33.804856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.384 qpair failed and we were unable to recover it. 00:29:14.384 [2024-07-26 11:17:33.814680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.385 [2024-07-26 11:17:33.814829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.385 [2024-07-26 11:17:33.814847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.385 [2024-07-26 11:17:33.814854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.385 [2024-07-26 11:17:33.814860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.385 [2024-07-26 11:17:33.814878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.385 qpair failed and we were unable to recover it. 
00:29:14.385 [2024-07-26 11:17:33.824645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.385 [2024-07-26 11:17:33.824797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.385 [2024-07-26 11:17:33.824816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.385 [2024-07-26 11:17:33.824823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.385 [2024-07-26 11:17:33.824830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.385 [2024-07-26 11:17:33.824848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.385 qpair failed and we were unable to recover it. 00:29:14.385 [2024-07-26 11:17:33.834654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.385 [2024-07-26 11:17:33.834806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.385 [2024-07-26 11:17:33.834823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.385 [2024-07-26 11:17:33.834830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.385 [2024-07-26 11:17:33.834836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.385 [2024-07-26 11:17:33.834854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.385 qpair failed and we were unable to recover it. 00:29:14.385 [2024-07-26 11:17:33.844801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.385 [2024-07-26 11:17:33.844953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.385 [2024-07-26 11:17:33.844972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.385 [2024-07-26 11:17:33.844979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.385 [2024-07-26 11:17:33.844985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.385 [2024-07-26 11:17:33.845002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.385 qpair failed and we were unable to recover it. 
00:29:14.385 [2024-07-26 11:17:33.854719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.385 [2024-07-26 11:17:33.854907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.385 [2024-07-26 11:17:33.854925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.385 [2024-07-26 11:17:33.854932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.385 [2024-07-26 11:17:33.854938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.385 [2024-07-26 11:17:33.854955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.385 qpair failed and we were unable to recover it. 00:29:14.385 [2024-07-26 11:17:33.864865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.385 [2024-07-26 11:17:33.865029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.385 [2024-07-26 11:17:33.865058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.385 [2024-07-26 11:17:33.865066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.385 [2024-07-26 11:17:33.865072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.385 [2024-07-26 11:17:33.865089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.385 qpair failed and we were unable to recover it. 00:29:14.385 [2024-07-26 11:17:33.874844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.385 [2024-07-26 11:17:33.874999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.385 [2024-07-26 11:17:33.875017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.385 [2024-07-26 11:17:33.875024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.385 [2024-07-26 11:17:33.875030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.385 [2024-07-26 11:17:33.875054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.385 qpair failed and we were unable to recover it. 
00:29:14.647 [2024-07-26 11:17:33.884808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.647 [2024-07-26 11:17:33.884968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.647 [2024-07-26 11:17:33.884987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.647 [2024-07-26 11:17:33.884994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.647 [2024-07-26 11:17:33.885002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.647 [2024-07-26 11:17:33.885020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.647 qpair failed and we were unable to recover it. 00:29:14.647 [2024-07-26 11:17:33.894911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.647 [2024-07-26 11:17:33.895063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.647 [2024-07-26 11:17:33.895081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.647 [2024-07-26 11:17:33.895088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.647 [2024-07-26 11:17:33.895094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.647 [2024-07-26 11:17:33.895112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.647 qpair failed and we were unable to recover it. 00:29:14.647 [2024-07-26 11:17:33.904917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.647 [2024-07-26 11:17:33.905076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.647 [2024-07-26 11:17:33.905094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.647 [2024-07-26 11:17:33.905101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.647 [2024-07-26 11:17:33.905107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.647 [2024-07-26 11:17:33.905127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.647 qpair failed and we were unable to recover it. 
00:29:14.647 [2024-07-26 11:17:33.914931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.647 [2024-07-26 11:17:33.915089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.647 [2024-07-26 11:17:33.915106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.647 [2024-07-26 11:17:33.915113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.647 [2024-07-26 11:17:33.915119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.647 [2024-07-26 11:17:33.915137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.647 qpair failed and we were unable to recover it. 00:29:14.647 [2024-07-26 11:17:33.924948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.647 [2024-07-26 11:17:33.925100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.647 [2024-07-26 11:17:33.925118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.647 [2024-07-26 11:17:33.925125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.647 [2024-07-26 11:17:33.925131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.647 [2024-07-26 11:17:33.925150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.647 qpair failed and we were unable to recover it. 00:29:14.647 [2024-07-26 11:17:33.935051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.647 [2024-07-26 11:17:33.935205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.647 [2024-07-26 11:17:33.935223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.647 [2024-07-26 11:17:33.935230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.647 [2024-07-26 11:17:33.935238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.647 [2024-07-26 11:17:33.935256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.647 qpair failed and we were unable to recover it. 
00:29:14.647 [2024-07-26 11:17:33.945051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.647 [2024-07-26 11:17:33.945213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.647 [2024-07-26 11:17:33.945231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.647 [2024-07-26 11:17:33.945238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.647 [2024-07-26 11:17:33.945244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.647 [2024-07-26 11:17:33.945262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.647 qpair failed and we were unable to recover it. 00:29:14.647 [2024-07-26 11:17:33.955067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.647 [2024-07-26 11:17:33.955222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.647 [2024-07-26 11:17:33.955243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.647 [2024-07-26 11:17:33.955250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.647 [2024-07-26 11:17:33.955257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.647 [2024-07-26 11:17:33.955274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.647 qpair failed and we were unable to recover it. 00:29:14.647 [2024-07-26 11:17:33.965136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.647 [2024-07-26 11:17:33.965308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.648 [2024-07-26 11:17:33.965326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.648 [2024-07-26 11:17:33.965333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.648 [2024-07-26 11:17:33.965339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.648 [2024-07-26 11:17:33.965356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.648 qpair failed and we were unable to recover it. 
00:29:14.648 [2024-07-26 11:17:33.975222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.648 [2024-07-26 11:17:33.975395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.648 [2024-07-26 11:17:33.975413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.648 [2024-07-26 11:17:33.975420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.648 [2024-07-26 11:17:33.975426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.648 [2024-07-26 11:17:33.975444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.648 qpair failed and we were unable to recover it. 00:29:14.648 [2024-07-26 11:17:33.985186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.648 [2024-07-26 11:17:33.985564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.648 [2024-07-26 11:17:33.985583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.648 [2024-07-26 11:17:33.985589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.648 [2024-07-26 11:17:33.985595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.648 [2024-07-26 11:17:33.985611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.648 qpair failed and we were unable to recover it. 00:29:14.648 [2024-07-26 11:17:33.995138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.648 [2024-07-26 11:17:33.995289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.648 [2024-07-26 11:17:33.995307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.648 [2024-07-26 11:17:33.995314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.648 [2024-07-26 11:17:33.995319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.648 [2024-07-26 11:17:33.995340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.648 qpair failed and we were unable to recover it. 
00:29:14.648 [2024-07-26 11:17:34.005232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.648 [2024-07-26 11:17:34.005387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.648 [2024-07-26 11:17:34.005406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.648 [2024-07-26 11:17:34.005412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.648 [2024-07-26 11:17:34.005419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.648 [2024-07-26 11:17:34.005436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.648 qpair failed and we were unable to recover it. 00:29:14.648 [2024-07-26 11:17:34.015178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.648 [2024-07-26 11:17:34.015330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.648 [2024-07-26 11:17:34.015347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.648 [2024-07-26 11:17:34.015354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.648 [2024-07-26 11:17:34.015360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.648 [2024-07-26 11:17:34.015378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.648 qpair failed and we were unable to recover it. 00:29:14.648 [2024-07-26 11:17:34.025313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.648 [2024-07-26 11:17:34.025460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.648 [2024-07-26 11:17:34.025479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.648 [2024-07-26 11:17:34.025485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.648 [2024-07-26 11:17:34.025492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.648 [2024-07-26 11:17:34.025509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.648 qpair failed and we were unable to recover it. 
00:29:14.648 [2024-07-26 11:17:34.035247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.648 [2024-07-26 11:17:34.035398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.648 [2024-07-26 11:17:34.035416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.648 [2024-07-26 11:17:34.035423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.648 [2024-07-26 11:17:34.035429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.648 [2024-07-26 11:17:34.035446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.648 qpair failed and we were unable to recover it. 00:29:14.648 [2024-07-26 11:17:34.045346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.648 [2024-07-26 11:17:34.045500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.648 [2024-07-26 11:17:34.045518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.648 [2024-07-26 11:17:34.045525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.648 [2024-07-26 11:17:34.045531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.648 [2024-07-26 11:17:34.045548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.648 qpair failed and we were unable to recover it. 00:29:14.648 [2024-07-26 11:17:34.055304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.648 [2024-07-26 11:17:34.055450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.648 [2024-07-26 11:17:34.055468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.648 [2024-07-26 11:17:34.055476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.648 [2024-07-26 11:17:34.055481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.648 [2024-07-26 11:17:34.055499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.648 qpair failed and we were unable to recover it. 
00:29:14.648 [2024-07-26 11:17:34.065401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.648 [2024-07-26 11:17:34.065551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.648 [2024-07-26 11:17:34.065569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.648 [2024-07-26 11:17:34.065576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.648 [2024-07-26 11:17:34.065582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.648 [2024-07-26 11:17:34.065599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.648 qpair failed and we were unable to recover it. 00:29:14.648 [2024-07-26 11:17:34.075393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.648 [2024-07-26 11:17:34.075551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.648 [2024-07-26 11:17:34.075569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.648 [2024-07-26 11:17:34.075577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.648 [2024-07-26 11:17:34.075583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.648 [2024-07-26 11:17:34.075601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.648 qpair failed and we were unable to recover it. 00:29:14.648 [2024-07-26 11:17:34.085421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.648 [2024-07-26 11:17:34.085601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.648 [2024-07-26 11:17:34.085619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.648 [2024-07-26 11:17:34.085626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.648 [2024-07-26 11:17:34.085635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.648 [2024-07-26 11:17:34.085652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.648 qpair failed and we were unable to recover it. 
00:29:14.648 [2024-07-26 11:17:34.095478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.648 [2024-07-26 11:17:34.095629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.648 [2024-07-26 11:17:34.095647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.648 [2024-07-26 11:17:34.095654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.649 [2024-07-26 11:17:34.095660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.649 [2024-07-26 11:17:34.095679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.649 qpair failed and we were unable to recover it. 00:29:14.649 [2024-07-26 11:17:34.105503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.649 [2024-07-26 11:17:34.105670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.649 [2024-07-26 11:17:34.105688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.649 [2024-07-26 11:17:34.105695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.649 [2024-07-26 11:17:34.105701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.649 [2024-07-26 11:17:34.105718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.649 qpair failed and we were unable to recover it. 00:29:14.649 [2024-07-26 11:17:34.115523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.649 [2024-07-26 11:17:34.115675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.649 [2024-07-26 11:17:34.115694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.649 [2024-07-26 11:17:34.115700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.649 [2024-07-26 11:17:34.115706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.649 [2024-07-26 11:17:34.115724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.649 qpair failed and we were unable to recover it. 
00:29:14.649 [2024-07-26 11:17:34.125521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.649 [2024-07-26 11:17:34.125673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.649 [2024-07-26 11:17:34.125691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.649 [2024-07-26 11:17:34.125698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.649 [2024-07-26 11:17:34.125704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.649 [2024-07-26 11:17:34.125721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.649 qpair failed and we were unable to recover it. 00:29:14.649 [2024-07-26 11:17:34.135607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.649 [2024-07-26 11:17:34.135760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.649 [2024-07-26 11:17:34.135779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.649 [2024-07-26 11:17:34.135786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.649 [2024-07-26 11:17:34.135793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.649 [2024-07-26 11:17:34.135810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.649 qpair failed and we were unable to recover it. 00:29:14.911 [2024-07-26 11:17:34.145534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.911 [2024-07-26 11:17:34.145685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.911 [2024-07-26 11:17:34.145704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.911 [2024-07-26 11:17:34.145711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.911 [2024-07-26 11:17:34.145717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.911 [2024-07-26 11:17:34.145735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.911 qpair failed and we were unable to recover it. 
00:29:14.911 [2024-07-26 11:17:34.155606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.911 [2024-07-26 11:17:34.155757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.911 [2024-07-26 11:17:34.155775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.911 [2024-07-26 11:17:34.155783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.911 [2024-07-26 11:17:34.155791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.911 [2024-07-26 11:17:34.155808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.911 qpair failed and we were unable to recover it. 00:29:14.911 [2024-07-26 11:17:34.165646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.911 [2024-07-26 11:17:34.165805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.911 [2024-07-26 11:17:34.165822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.911 [2024-07-26 11:17:34.165830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.911 [2024-07-26 11:17:34.165836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.911 [2024-07-26 11:17:34.165853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.911 qpair failed and we were unable to recover it. 00:29:14.911 [2024-07-26 11:17:34.175649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.911 [2024-07-26 11:17:34.175805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.911 [2024-07-26 11:17:34.175823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.911 [2024-07-26 11:17:34.175834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.911 [2024-07-26 11:17:34.175841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.911 [2024-07-26 11:17:34.175858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.911 qpair failed and we were unable to recover it. 
00:29:14.911 [2024-07-26 11:17:34.185702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.911 [2024-07-26 11:17:34.185889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.911 [2024-07-26 11:17:34.185907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.911 [2024-07-26 11:17:34.185915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.911 [2024-07-26 11:17:34.185921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.911 [2024-07-26 11:17:34.185938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.911 qpair failed and we were unable to recover it. 00:29:14.911 [2024-07-26 11:17:34.195757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.912 [2024-07-26 11:17:34.195911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.912 [2024-07-26 11:17:34.195929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.912 [2024-07-26 11:17:34.195936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.912 [2024-07-26 11:17:34.195942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.912 [2024-07-26 11:17:34.195959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.912 qpair failed and we were unable to recover it. 00:29:14.912 [2024-07-26 11:17:34.205768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.912 [2024-07-26 11:17:34.205917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.912 [2024-07-26 11:17:34.205935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.912 [2024-07-26 11:17:34.205942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.912 [2024-07-26 11:17:34.205948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.912 [2024-07-26 11:17:34.205965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.912 qpair failed and we were unable to recover it. 
00:29:14.912 [2024-07-26 11:17:34.215750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.912 [2024-07-26 11:17:34.215901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.912 [2024-07-26 11:17:34.215918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.912 [2024-07-26 11:17:34.215925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.912 [2024-07-26 11:17:34.215931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.912 [2024-07-26 11:17:34.215949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.912 qpair failed and we were unable to recover it. 00:29:14.912 [2024-07-26 11:17:34.225751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.912 [2024-07-26 11:17:34.225903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.912 [2024-07-26 11:17:34.225921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.912 [2024-07-26 11:17:34.225927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.912 [2024-07-26 11:17:34.225933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.912 [2024-07-26 11:17:34.225951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.912 qpair failed and we were unable to recover it. 00:29:14.912 [2024-07-26 11:17:34.235854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.912 [2024-07-26 11:17:34.236008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.912 [2024-07-26 11:17:34.236026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.912 [2024-07-26 11:17:34.236033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.912 [2024-07-26 11:17:34.236040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.912 [2024-07-26 11:17:34.236064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.912 qpair failed and we were unable to recover it. 
00:29:14.912 [2024-07-26 11:17:34.245801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.912 [2024-07-26 11:17:34.245948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.912 [2024-07-26 11:17:34.245965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.912 [2024-07-26 11:17:34.245972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.912 [2024-07-26 11:17:34.245978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.912 [2024-07-26 11:17:34.245995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.912 qpair failed and we were unable to recover it. 00:29:14.912 [2024-07-26 11:17:34.255910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.912 [2024-07-26 11:17:34.256064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.912 [2024-07-26 11:17:34.256082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.912 [2024-07-26 11:17:34.256088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.912 [2024-07-26 11:17:34.256094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.912 [2024-07-26 11:17:34.256112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.912 qpair failed and we were unable to recover it. 00:29:14.912 [2024-07-26 11:17:34.265946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.912 [2024-07-26 11:17:34.266105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.912 [2024-07-26 11:17:34.266123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.912 [2024-07-26 11:17:34.266133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.912 [2024-07-26 11:17:34.266140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.912 [2024-07-26 11:17:34.266157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.912 qpair failed and we were unable to recover it. 
00:29:14.912 [2024-07-26 11:17:34.276010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.912 [2024-07-26 11:17:34.276164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.912 [2024-07-26 11:17:34.276181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.912 [2024-07-26 11:17:34.276188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.912 [2024-07-26 11:17:34.276194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.912 [2024-07-26 11:17:34.276212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.912 qpair failed and we were unable to recover it. 00:29:14.912 [2024-07-26 11:17:34.285998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.912 [2024-07-26 11:17:34.286154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.912 [2024-07-26 11:17:34.286173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.912 [2024-07-26 11:17:34.286180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.912 [2024-07-26 11:17:34.286186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.912 [2024-07-26 11:17:34.286203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.912 qpair failed and we were unable to recover it. 00:29:14.912 [2024-07-26 11:17:34.296041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.912 [2024-07-26 11:17:34.296191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.912 [2024-07-26 11:17:34.296210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.912 [2024-07-26 11:17:34.296218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.912 [2024-07-26 11:17:34.296223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.912 [2024-07-26 11:17:34.296241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.912 qpair failed and we were unable to recover it. 
00:29:14.912 [2024-07-26 11:17:34.306047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.912 [2024-07-26 11:17:34.306198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.912 [2024-07-26 11:17:34.306216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.912 [2024-07-26 11:17:34.306223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.912 [2024-07-26 11:17:34.306229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.912 [2024-07-26 11:17:34.306246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.912 qpair failed and we were unable to recover it. 00:29:14.912 [2024-07-26 11:17:34.316104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.912 [2024-07-26 11:17:34.316257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.912 [2024-07-26 11:17:34.316275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.912 [2024-07-26 11:17:34.316282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.912 [2024-07-26 11:17:34.316288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.912 [2024-07-26 11:17:34.316306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.912 qpair failed and we were unable to recover it. 00:29:14.912 [2024-07-26 11:17:34.326129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.912 [2024-07-26 11:17:34.326280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.913 [2024-07-26 11:17:34.326298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.913 [2024-07-26 11:17:34.326305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.913 [2024-07-26 11:17:34.326311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.913 [2024-07-26 11:17:34.326328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.913 qpair failed and we were unable to recover it. 
00:29:14.913 [2024-07-26 11:17:34.336164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.913 [2024-07-26 11:17:34.336312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.913 [2024-07-26 11:17:34.336329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.913 [2024-07-26 11:17:34.336336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.913 [2024-07-26 11:17:34.336342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.913 [2024-07-26 11:17:34.336359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.913 qpair failed and we were unable to recover it. 00:29:14.913 [2024-07-26 11:17:34.346198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.913 [2024-07-26 11:17:34.346347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.913 [2024-07-26 11:17:34.346365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.913 [2024-07-26 11:17:34.346373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.913 [2024-07-26 11:17:34.346378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.913 [2024-07-26 11:17:34.346395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.913 qpair failed and we were unable to recover it. 00:29:14.913 [2024-07-26 11:17:34.356215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.913 [2024-07-26 11:17:34.356369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.913 [2024-07-26 11:17:34.356390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.913 [2024-07-26 11:17:34.356397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.913 [2024-07-26 11:17:34.356403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.913 [2024-07-26 11:17:34.356421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.913 qpair failed and we were unable to recover it. 
00:29:14.913 [2024-07-26 11:17:34.366176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.913 [2024-07-26 11:17:34.366325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.913 [2024-07-26 11:17:34.366343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.913 [2024-07-26 11:17:34.366350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.913 [2024-07-26 11:17:34.366357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.913 [2024-07-26 11:17:34.366373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.913 qpair failed and we were unable to recover it. 00:29:14.913 [2024-07-26 11:17:34.376276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.913 [2024-07-26 11:17:34.376426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.913 [2024-07-26 11:17:34.376445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.913 [2024-07-26 11:17:34.376451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.913 [2024-07-26 11:17:34.376458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.913 [2024-07-26 11:17:34.376475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.913 qpair failed and we were unable to recover it. 00:29:14.913 [2024-07-26 11:17:34.386254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.913 [2024-07-26 11:17:34.386438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.913 [2024-07-26 11:17:34.386455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.913 [2024-07-26 11:17:34.386462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.913 [2024-07-26 11:17:34.386469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.913 [2024-07-26 11:17:34.386486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.913 qpair failed and we were unable to recover it. 
00:29:14.913 [2024-07-26 11:17:34.396333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.913 [2024-07-26 11:17:34.396482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.913 [2024-07-26 11:17:34.396500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.913 [2024-07-26 11:17:34.396507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.913 [2024-07-26 11:17:34.396514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:14.913 [2024-07-26 11:17:34.396537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:14.913 qpair failed and we were unable to recover it. 00:29:15.175 [2024-07-26 11:17:34.406382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.175 [2024-07-26 11:17:34.406530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.175 [2024-07-26 11:17:34.406548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.175 [2024-07-26 11:17:34.406555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.175 [2024-07-26 11:17:34.406561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.175 [2024-07-26 11:17:34.406578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.175 qpair failed and we were unable to recover it. 00:29:15.175 [2024-07-26 11:17:34.416410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.175 [2024-07-26 11:17:34.416570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.175 [2024-07-26 11:17:34.416588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.175 [2024-07-26 11:17:34.416595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.175 [2024-07-26 11:17:34.416600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.175 [2024-07-26 11:17:34.416618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.175 qpair failed and we were unable to recover it. 
00:29:15.175 [2024-07-26 11:17:34.426421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.175 [2024-07-26 11:17:34.426574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.175 [2024-07-26 11:17:34.426592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.175 [2024-07-26 11:17:34.426599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.175 [2024-07-26 11:17:34.426605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.175 [2024-07-26 11:17:34.426622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.175 qpair failed and we were unable to recover it. 00:29:15.175 [2024-07-26 11:17:34.436455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.175 [2024-07-26 11:17:34.436610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.175 [2024-07-26 11:17:34.436627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.175 [2024-07-26 11:17:34.436634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.175 [2024-07-26 11:17:34.436640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.175 [2024-07-26 11:17:34.436657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.175 qpair failed and we were unable to recover it. 00:29:15.175 [2024-07-26 11:17:34.446479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.175 [2024-07-26 11:17:34.446633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.175 [2024-07-26 11:17:34.446655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.175 [2024-07-26 11:17:34.446662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.175 [2024-07-26 11:17:34.446668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.175 [2024-07-26 11:17:34.446686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.175 qpair failed and we were unable to recover it. 
00:29:15.175 [2024-07-26 11:17:34.456538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.175 [2024-07-26 11:17:34.456691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.175 [2024-07-26 11:17:34.456709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.175 [2024-07-26 11:17:34.456717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.175 [2024-07-26 11:17:34.456723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.175 [2024-07-26 11:17:34.456741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.175 qpair failed and we were unable to recover it. 00:29:15.175 [2024-07-26 11:17:34.466562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.175 [2024-07-26 11:17:34.466710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.175 [2024-07-26 11:17:34.466728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.175 [2024-07-26 11:17:34.466735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.175 [2024-07-26 11:17:34.466741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.175 [2024-07-26 11:17:34.466759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.175 qpair failed and we were unable to recover it. 00:29:15.175 [2024-07-26 11:17:34.476826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.175 [2024-07-26 11:17:34.476980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.175 [2024-07-26 11:17:34.476998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.175 [2024-07-26 11:17:34.477005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.175 [2024-07-26 11:17:34.477012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.175 [2024-07-26 11:17:34.477030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.175 qpair failed and we were unable to recover it. 
00:29:15.175 [2024-07-26 11:17:34.486530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.175 [2024-07-26 11:17:34.486692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.175 [2024-07-26 11:17:34.486710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.175 [2024-07-26 11:17:34.486717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.175 [2024-07-26 11:17:34.486726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.175 [2024-07-26 11:17:34.486744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.175 qpair failed and we were unable to recover it. 00:29:15.175 [2024-07-26 11:17:34.496685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.175 [2024-07-26 11:17:34.496836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.175 [2024-07-26 11:17:34.496853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.175 [2024-07-26 11:17:34.496860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.175 [2024-07-26 11:17:34.496866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.175 [2024-07-26 11:17:34.496884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.175 qpair failed and we were unable to recover it. 00:29:15.175 [2024-07-26 11:17:34.506601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.175 [2024-07-26 11:17:34.506751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.175 [2024-07-26 11:17:34.506769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.175 [2024-07-26 11:17:34.506776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.175 [2024-07-26 11:17:34.506782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.175 [2024-07-26 11:17:34.506799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.175 qpair failed and we were unable to recover it. 
00:29:15.175 [2024-07-26 11:17:34.516699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.175 [2024-07-26 11:17:34.516851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.176 [2024-07-26 11:17:34.516868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.176 [2024-07-26 11:17:34.516875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.176 [2024-07-26 11:17:34.516881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.176 [2024-07-26 11:17:34.516898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.176 qpair failed and we were unable to recover it. 00:29:15.176 [2024-07-26 11:17:34.526734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.176 [2024-07-26 11:17:34.526881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.176 [2024-07-26 11:17:34.526899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.176 [2024-07-26 11:17:34.526906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.176 [2024-07-26 11:17:34.526914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.176 [2024-07-26 11:17:34.526931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.176 qpair failed and we were unable to recover it. 00:29:15.176 [2024-07-26 11:17:34.536761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.176 [2024-07-26 11:17:34.536915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.176 [2024-07-26 11:17:34.536933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.176 [2024-07-26 11:17:34.536940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.176 [2024-07-26 11:17:34.536946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.176 [2024-07-26 11:17:34.536963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.176 qpair failed and we were unable to recover it. 
00:29:15.176 [2024-07-26 11:17:34.546828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.176 [2024-07-26 11:17:34.546998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.176 [2024-07-26 11:17:34.547015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.176 [2024-07-26 11:17:34.547023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.176 [2024-07-26 11:17:34.547029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.176 [2024-07-26 11:17:34.547054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.176 qpair failed and we were unable to recover it. 00:29:15.176 [2024-07-26 11:17:34.556821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.176 [2024-07-26 11:17:34.556972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.176 [2024-07-26 11:17:34.556989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.176 [2024-07-26 11:17:34.556996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.176 [2024-07-26 11:17:34.557002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.176 [2024-07-26 11:17:34.557018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.176 qpair failed and we were unable to recover it. 00:29:15.176 [2024-07-26 11:17:34.566834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.176 [2024-07-26 11:17:34.567010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.176 [2024-07-26 11:17:34.567028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.176 [2024-07-26 11:17:34.567036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.176 [2024-07-26 11:17:34.567048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.176 [2024-07-26 11:17:34.567066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.176 qpair failed and we were unable to recover it. 
00:29:15.176 [2024-07-26 11:17:34.576869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.176 [2024-07-26 11:17:34.577019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.176 [2024-07-26 11:17:34.577037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.176 [2024-07-26 11:17:34.577053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.176 [2024-07-26 11:17:34.577060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.176 [2024-07-26 11:17:34.577077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.176 qpair failed and we were unable to recover it. 00:29:15.176 [2024-07-26 11:17:34.586914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.176 [2024-07-26 11:17:34.587068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.176 [2024-07-26 11:17:34.587086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.176 [2024-07-26 11:17:34.587093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.176 [2024-07-26 11:17:34.587099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.176 [2024-07-26 11:17:34.587117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.176 qpair failed and we were unable to recover it. 00:29:15.176 [2024-07-26 11:17:34.596909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.176 [2024-07-26 11:17:34.597065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.176 [2024-07-26 11:17:34.597082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.176 [2024-07-26 11:17:34.597089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.176 [2024-07-26 11:17:34.597095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.176 [2024-07-26 11:17:34.597113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.176 qpair failed and we were unable to recover it. 
00:29:15.176 [2024-07-26 11:17:34.606946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.176 [2024-07-26 11:17:34.607103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.176 [2024-07-26 11:17:34.607122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.176 [2024-07-26 11:17:34.607129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.176 [2024-07-26 11:17:34.607135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.176 [2024-07-26 11:17:34.607152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.176 qpair failed and we were unable to recover it. 00:29:15.176 [2024-07-26 11:17:34.616985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.176 [2024-07-26 11:17:34.617142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.176 [2024-07-26 11:17:34.617160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.176 [2024-07-26 11:17:34.617167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.176 [2024-07-26 11:17:34.617173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.176 [2024-07-26 11:17:34.617190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.176 qpair failed and we were unable to recover it. 00:29:15.176 [2024-07-26 11:17:34.627019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.176 [2024-07-26 11:17:34.627172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.176 [2024-07-26 11:17:34.627190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.176 [2024-07-26 11:17:34.627197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.176 [2024-07-26 11:17:34.627203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.176 [2024-07-26 11:17:34.627220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.176 qpair failed and we were unable to recover it. 
00:29:15.176 [2024-07-26 11:17:34.637032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.176 [2024-07-26 11:17:34.637192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.176 [2024-07-26 11:17:34.637210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.176 [2024-07-26 11:17:34.637217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.176 [2024-07-26 11:17:34.637223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.176 [2024-07-26 11:17:34.637241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.176 qpair failed and we were unable to recover it. 00:29:15.176 [2024-07-26 11:17:34.646996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.176 [2024-07-26 11:17:34.647158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.176 [2024-07-26 11:17:34.647176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.177 [2024-07-26 11:17:34.647183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.177 [2024-07-26 11:17:34.647189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.177 [2024-07-26 11:17:34.647206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.177 qpair failed and we were unable to recover it. 00:29:15.177 [2024-07-26 11:17:34.657098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.177 [2024-07-26 11:17:34.657246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.177 [2024-07-26 11:17:34.657264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.177 [2024-07-26 11:17:34.657271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.177 [2024-07-26 11:17:34.657277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.177 [2024-07-26 11:17:34.657294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.177 qpair failed and we were unable to recover it. 
00:29:15.177 [2024-07-26 11:17:34.667118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.177 [2024-07-26 11:17:34.667272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.177 [2024-07-26 11:17:34.667290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.177 [2024-07-26 11:17:34.667300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.177 [2024-07-26 11:17:34.667307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.177 [2024-07-26 11:17:34.667323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.177 qpair failed and we were unable to recover it. 00:29:15.439 [2024-07-26 11:17:34.677133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.439 [2024-07-26 11:17:34.677290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.439 [2024-07-26 11:17:34.677308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.439 [2024-07-26 11:17:34.677315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.439 [2024-07-26 11:17:34.677322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.439 [2024-07-26 11:17:34.677340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.439 qpair failed and we were unable to recover it. 00:29:15.439 [2024-07-26 11:17:34.687189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.439 [2024-07-26 11:17:34.687333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.439 [2024-07-26 11:17:34.687351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.439 [2024-07-26 11:17:34.687358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.439 [2024-07-26 11:17:34.687364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.439 [2024-07-26 11:17:34.687381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.439 qpair failed and we were unable to recover it. 
00:29:15.439 [2024-07-26 11:17:34.697253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.439 [2024-07-26 11:17:34.697407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.439 [2024-07-26 11:17:34.697425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.439 [2024-07-26 11:17:34.697432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.439 [2024-07-26 11:17:34.697438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.439 [2024-07-26 11:17:34.697456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.439 qpair failed and we were unable to recover it. 00:29:15.439 [2024-07-26 11:17:34.707178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.439 [2024-07-26 11:17:34.707327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.439 [2024-07-26 11:17:34.707345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.439 [2024-07-26 11:17:34.707352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.439 [2024-07-26 11:17:34.707358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.439 [2024-07-26 11:17:34.707375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.439 qpair failed and we were unable to recover it. 00:29:15.439 [2024-07-26 11:17:34.717247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.439 [2024-07-26 11:17:34.717399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.439 [2024-07-26 11:17:34.717417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.439 [2024-07-26 11:17:34.717424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.439 [2024-07-26 11:17:34.717432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.439 [2024-07-26 11:17:34.717449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.439 qpair failed and we were unable to recover it. 
00:29:15.439 [2024-07-26 11:17:34.727298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.439 [2024-07-26 11:17:34.727451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.439 [2024-07-26 11:17:34.727468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.439 [2024-07-26 11:17:34.727475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.439 [2024-07-26 11:17:34.727482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.439 [2024-07-26 11:17:34.727499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.439 qpair failed and we were unable to recover it. 00:29:15.439 [2024-07-26 11:17:34.737269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.439 [2024-07-26 11:17:34.737438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.439 [2024-07-26 11:17:34.737456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.439 [2024-07-26 11:17:34.737464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.439 [2024-07-26 11:17:34.737473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.439 [2024-07-26 11:17:34.737491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.439 qpair failed and we were unable to recover it. 00:29:15.439 [2024-07-26 11:17:34.747363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.439 [2024-07-26 11:17:34.747514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.439 [2024-07-26 11:17:34.747532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.439 [2024-07-26 11:17:34.747539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.439 [2024-07-26 11:17:34.747545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.439 [2024-07-26 11:17:34.747563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.439 qpair failed and we were unable to recover it. 
00:29:15.439 [2024-07-26 11:17:34.757396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.439 [2024-07-26 11:17:34.757550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.439 [2024-07-26 11:17:34.757572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.439 [2024-07-26 11:17:34.757579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.439 [2024-07-26 11:17:34.757585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.439 [2024-07-26 11:17:34.757602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.439 qpair failed and we were unable to recover it. 00:29:15.439 [2024-07-26 11:17:34.767418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.439 [2024-07-26 11:17:34.767565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.439 [2024-07-26 11:17:34.767583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.439 [2024-07-26 11:17:34.767590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.439 [2024-07-26 11:17:34.767596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.439 [2024-07-26 11:17:34.767613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.439 qpair failed and we were unable to recover it. 00:29:15.439 [2024-07-26 11:17:34.777438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.439 [2024-07-26 11:17:34.777588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.439 [2024-07-26 11:17:34.777606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.439 [2024-07-26 11:17:34.777613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.439 [2024-07-26 11:17:34.777619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.439 [2024-07-26 11:17:34.777637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.439 qpair failed and we were unable to recover it. 
00:29:15.439 [2024-07-26 11:17:34.787495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.439 [2024-07-26 11:17:34.787642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.440 [2024-07-26 11:17:34.787660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.440 [2024-07-26 11:17:34.787667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.440 [2024-07-26 11:17:34.787673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.440 [2024-07-26 11:17:34.787690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.440 qpair failed and we were unable to recover it. 00:29:15.440 [2024-07-26 11:17:34.797473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.440 [2024-07-26 11:17:34.797626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.440 [2024-07-26 11:17:34.797644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.440 [2024-07-26 11:17:34.797651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.440 [2024-07-26 11:17:34.797658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.440 [2024-07-26 11:17:34.797679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.440 qpair failed and we were unable to recover it. 00:29:15.440 [2024-07-26 11:17:34.807551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.440 [2024-07-26 11:17:34.807723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.440 [2024-07-26 11:17:34.807741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.440 [2024-07-26 11:17:34.807748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.440 [2024-07-26 11:17:34.807754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.440 [2024-07-26 11:17:34.807772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.440 qpair failed and we were unable to recover it. 
00:29:15.440 [2024-07-26 11:17:34.817557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.440 [2024-07-26 11:17:34.817708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.440 [2024-07-26 11:17:34.817726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.440 [2024-07-26 11:17:34.817733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.440 [2024-07-26 11:17:34.817740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.440 [2024-07-26 11:17:34.817757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.440 qpair failed and we were unable to recover it. 00:29:15.440 [2024-07-26 11:17:34.827583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.440 [2024-07-26 11:17:34.827731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.440 [2024-07-26 11:17:34.827749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.440 [2024-07-26 11:17:34.827756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.440 [2024-07-26 11:17:34.827762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.440 [2024-07-26 11:17:34.827779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.440 qpair failed and we were unable to recover it. 00:29:15.440 [2024-07-26 11:17:34.837613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.440 [2024-07-26 11:17:34.837763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.440 [2024-07-26 11:17:34.837780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.440 [2024-07-26 11:17:34.837787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.440 [2024-07-26 11:17:34.837794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.440 [2024-07-26 11:17:34.837811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.440 qpair failed and we were unable to recover it. 
00:29:15.440 [2024-07-26 11:17:34.847649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.440 [2024-07-26 11:17:34.847800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.440 [2024-07-26 11:17:34.847821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.440 [2024-07-26 11:17:34.847828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.440 [2024-07-26 11:17:34.847834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.440 [2024-07-26 11:17:34.847852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.440 qpair failed and we were unable to recover it. 00:29:15.440 [2024-07-26 11:17:34.857642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.440 [2024-07-26 11:17:34.857802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.440 [2024-07-26 11:17:34.857820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.440 [2024-07-26 11:17:34.857827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.440 [2024-07-26 11:17:34.857833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.440 [2024-07-26 11:17:34.857850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.440 qpair failed and we were unable to recover it. 00:29:15.440 [2024-07-26 11:17:34.867698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.440 [2024-07-26 11:17:34.867850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.440 [2024-07-26 11:17:34.867868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.440 [2024-07-26 11:17:34.867875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.440 [2024-07-26 11:17:34.867880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.440 [2024-07-26 11:17:34.867898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.440 qpair failed and we were unable to recover it. 
00:29:15.440 [2024-07-26 11:17:34.877713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.440 [2024-07-26 11:17:34.877863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.440 [2024-07-26 11:17:34.877882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.440 [2024-07-26 11:17:34.877889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.440 [2024-07-26 11:17:34.877894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.440 [2024-07-26 11:17:34.877912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.440 qpair failed and we were unable to recover it. 00:29:15.440 [2024-07-26 11:17:34.887723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.440 [2024-07-26 11:17:34.887868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.440 [2024-07-26 11:17:34.887886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.440 [2024-07-26 11:17:34.887893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.440 [2024-07-26 11:17:34.887903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.440 [2024-07-26 11:17:34.887920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.440 qpair failed and we were unable to recover it. 00:29:15.440 [2024-07-26 11:17:34.897774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.440 [2024-07-26 11:17:34.897924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.440 [2024-07-26 11:17:34.897942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.440 [2024-07-26 11:17:34.897949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.440 [2024-07-26 11:17:34.897954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.440 [2024-07-26 11:17:34.897972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.440 qpair failed and we were unable to recover it. 
00:29:15.440 [2024-07-26 11:17:34.907860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.440 [2024-07-26 11:17:34.908005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.440 [2024-07-26 11:17:34.908023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.440 [2024-07-26 11:17:34.908030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.440 [2024-07-26 11:17:34.908036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.440 [2024-07-26 11:17:34.908059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.440 qpair failed and we were unable to recover it. 00:29:15.440 [2024-07-26 11:17:34.917796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.440 [2024-07-26 11:17:34.917944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.440 [2024-07-26 11:17:34.917962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.440 [2024-07-26 11:17:34.917969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.441 [2024-07-26 11:17:34.917975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.441 [2024-07-26 11:17:34.917993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.441 qpair failed and we were unable to recover it. 00:29:15.441 [2024-07-26 11:17:34.927791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.441 [2024-07-26 11:17:34.927947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.441 [2024-07-26 11:17:34.927965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.441 [2024-07-26 11:17:34.927972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.441 [2024-07-26 11:17:34.927977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.441 [2024-07-26 11:17:34.927995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.441 qpair failed and we were unable to recover it. 
00:29:15.702 [2024-07-26 11:17:34.937888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.702 [2024-07-26 11:17:34.938054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.702 [2024-07-26 11:17:34.938073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.702 [2024-07-26 11:17:34.938079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.702 [2024-07-26 11:17:34.938085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.702 [2024-07-26 11:17:34.938103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.702 qpair failed and we were unable to recover it. 00:29:15.702 [2024-07-26 11:17:34.947936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.702 [2024-07-26 11:17:34.948094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.703 [2024-07-26 11:17:34.948112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.703 [2024-07-26 11:17:34.948119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.703 [2024-07-26 11:17:34.948125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.703 [2024-07-26 11:17:34.948142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-07-26 11:17:34.957940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.703 [2024-07-26 11:17:34.958101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.703 [2024-07-26 11:17:34.958119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.703 [2024-07-26 11:17:34.958126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.703 [2024-07-26 11:17:34.958132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.703 [2024-07-26 11:17:34.958149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.703 qpair failed and we were unable to recover it. 
00:29:15.703 [2024-07-26 11:17:34.968014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.703 [2024-07-26 11:17:34.968182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.703 [2024-07-26 11:17:34.968201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.703 [2024-07-26 11:17:34.968208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.703 [2024-07-26 11:17:34.968213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.703 [2024-07-26 11:17:34.968231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-07-26 11:17:34.978005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.703 [2024-07-26 11:17:34.978157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.703 [2024-07-26 11:17:34.978175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.703 [2024-07-26 11:17:34.978182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.703 [2024-07-26 11:17:34.978191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.703 [2024-07-26 11:17:34.978209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-07-26 11:17:34.988033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.703 [2024-07-26 11:17:34.988205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.703 [2024-07-26 11:17:34.988223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.703 [2024-07-26 11:17:34.988230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.703 [2024-07-26 11:17:34.988236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.703 [2024-07-26 11:17:34.988253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.703 qpair failed and we were unable to recover it. 
00:29:15.703 [2024-07-26 11:17:34.998058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.703 [2024-07-26 11:17:34.998207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.703 [2024-07-26 11:17:34.998224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.703 [2024-07-26 11:17:34.998231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.703 [2024-07-26 11:17:34.998237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.703 [2024-07-26 11:17:34.998254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-07-26 11:17:35.008087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.703 [2024-07-26 11:17:35.008238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.703 [2024-07-26 11:17:35.008256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.703 [2024-07-26 11:17:35.008262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.703 [2024-07-26 11:17:35.008268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.703 [2024-07-26 11:17:35.008286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-07-26 11:17:35.018117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.703 [2024-07-26 11:17:35.018263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.703 [2024-07-26 11:17:35.018281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.703 [2024-07-26 11:17:35.018288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.703 [2024-07-26 11:17:35.018294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.703 [2024-07-26 11:17:35.018312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.703 qpair failed and we were unable to recover it. 
00:29:15.703 [2024-07-26 11:17:35.028143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.703 [2024-07-26 11:17:35.028293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.703 [2024-07-26 11:17:35.028310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.703 [2024-07-26 11:17:35.028317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.703 [2024-07-26 11:17:35.028323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.703 [2024-07-26 11:17:35.028340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-07-26 11:17:35.038165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.703 [2024-07-26 11:17:35.038320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.703 [2024-07-26 11:17:35.038338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.703 [2024-07-26 11:17:35.038345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.703 [2024-07-26 11:17:35.038351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.703 [2024-07-26 11:17:35.038369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-07-26 11:17:35.048238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.703 [2024-07-26 11:17:35.048384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.703 [2024-07-26 11:17:35.048402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.703 [2024-07-26 11:17:35.048410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.703 [2024-07-26 11:17:35.048417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.703 [2024-07-26 11:17:35.048434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.703 qpair failed and we were unable to recover it. 
00:29:15.703 [2024-07-26 11:17:35.058236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.703 [2024-07-26 11:17:35.058383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.703 [2024-07-26 11:17:35.058401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.703 [2024-07-26 11:17:35.058407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.703 [2024-07-26 11:17:35.058413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.703 [2024-07-26 11:17:35.058431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-07-26 11:17:35.068209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.703 [2024-07-26 11:17:35.068363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.703 [2024-07-26 11:17:35.068381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.703 [2024-07-26 11:17:35.068391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.703 [2024-07-26 11:17:35.068398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.703 [2024-07-26 11:17:35.068414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.703 qpair failed and we were unable to recover it. 00:29:15.703 [2024-07-26 11:17:35.078299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.703 [2024-07-26 11:17:35.078449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.704 [2024-07-26 11:17:35.078467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.704 [2024-07-26 11:17:35.078475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.704 [2024-07-26 11:17:35.078480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.704 [2024-07-26 11:17:35.078498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.704 qpair failed and we were unable to recover it. 
00:29:15.704 [2024-07-26 11:17:35.088323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.704 [2024-07-26 11:17:35.088492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.704 [2024-07-26 11:17:35.088510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.704 [2024-07-26 11:17:35.088517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.704 [2024-07-26 11:17:35.088523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.704 [2024-07-26 11:17:35.088541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-07-26 11:17:35.098350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.704 [2024-07-26 11:17:35.098497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.704 [2024-07-26 11:17:35.098515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.704 [2024-07-26 11:17:35.098522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.704 [2024-07-26 11:17:35.098528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.704 [2024-07-26 11:17:35.098545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-07-26 11:17:35.108385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.704 [2024-07-26 11:17:35.108538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.704 [2024-07-26 11:17:35.108556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.704 [2024-07-26 11:17:35.108563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.704 [2024-07-26 11:17:35.108569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.704 [2024-07-26 11:17:35.108587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.704 qpair failed and we were unable to recover it. 
00:29:15.704 [2024-07-26 11:17:35.118342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.704 [2024-07-26 11:17:35.118490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.704 [2024-07-26 11:17:35.118508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.704 [2024-07-26 11:17:35.118515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.704 [2024-07-26 11:17:35.118520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.704 [2024-07-26 11:17:35.118537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-07-26 11:17:35.128447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.704 [2024-07-26 11:17:35.128600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.704 [2024-07-26 11:17:35.128618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.704 [2024-07-26 11:17:35.128625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.704 [2024-07-26 11:17:35.128631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.704 [2024-07-26 11:17:35.128648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-07-26 11:17:35.138472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.704 [2024-07-26 11:17:35.138614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.704 [2024-07-26 11:17:35.138631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.704 [2024-07-26 11:17:35.138638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.704 [2024-07-26 11:17:35.138644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.704 [2024-07-26 11:17:35.138661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.704 qpair failed and we were unable to recover it. 
00:29:15.704 [2024-07-26 11:17:35.148512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.704 [2024-07-26 11:17:35.148663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.704 [2024-07-26 11:17:35.148681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.704 [2024-07-26 11:17:35.148688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.704 [2024-07-26 11:17:35.148694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.704 [2024-07-26 11:17:35.148711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-07-26 11:17:35.158525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.704 [2024-07-26 11:17:35.158673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.704 [2024-07-26 11:17:35.158696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.704 [2024-07-26 11:17:35.158703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.704 [2024-07-26 11:17:35.158709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.704 [2024-07-26 11:17:35.158727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-07-26 11:17:35.168579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.704 [2024-07-26 11:17:35.168730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.704 [2024-07-26 11:17:35.168748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.704 [2024-07-26 11:17:35.168755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.704 [2024-07-26 11:17:35.168760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.704 [2024-07-26 11:17:35.168778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.704 qpair failed and we were unable to recover it. 
00:29:15.704 [2024-07-26 11:17:35.178586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.704 [2024-07-26 11:17:35.178734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.704 [2024-07-26 11:17:35.178752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.704 [2024-07-26 11:17:35.178759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.704 [2024-07-26 11:17:35.178765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.704 [2024-07-26 11:17:35.178782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.704 [2024-07-26 11:17:35.188527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.704 [2024-07-26 11:17:35.188694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.704 [2024-07-26 11:17:35.188712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.704 [2024-07-26 11:17:35.188718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.704 [2024-07-26 11:17:35.188724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.704 [2024-07-26 11:17:35.188742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.704 qpair failed and we were unable to recover it. 00:29:15.966 [2024-07-26 11:17:35.198610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.966 [2024-07-26 11:17:35.198765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.966 [2024-07-26 11:17:35.198784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.966 [2024-07-26 11:17:35.198792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.966 [2024-07-26 11:17:35.198798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.966 [2024-07-26 11:17:35.198819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.966 qpair failed and we were unable to recover it. 
00:29:15.966 [2024-07-26 11:17:35.208649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.966 [2024-07-26 11:17:35.208800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.966 [2024-07-26 11:17:35.208818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.966 [2024-07-26 11:17:35.208825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.966 [2024-07-26 11:17:35.208831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.966 [2024-07-26 11:17:35.208849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.966 qpair failed and we were unable to recover it. 00:29:15.966 [2024-07-26 11:17:35.218697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.966 [2024-07-26 11:17:35.218847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.966 [2024-07-26 11:17:35.218865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.966 [2024-07-26 11:17:35.218872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.966 [2024-07-26 11:17:35.218878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.966 [2024-07-26 11:17:35.218896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.966 qpair failed and we were unable to recover it. 00:29:15.966 [2024-07-26 11:17:35.228736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.966 [2024-07-26 11:17:35.228990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.966 [2024-07-26 11:17:35.229008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.966 [2024-07-26 11:17:35.229015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.966 [2024-07-26 11:17:35.229021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.966 [2024-07-26 11:17:35.229037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.966 qpair failed and we were unable to recover it. 
00:29:15.966 [2024-07-26 11:17:35.238690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.966 [2024-07-26 11:17:35.238841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.966 [2024-07-26 11:17:35.238859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.966 [2024-07-26 11:17:35.238866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.966 [2024-07-26 11:17:35.238872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.966 [2024-07-26 11:17:35.238890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.966 qpair failed and we were unable to recover it. 00:29:15.966 [2024-07-26 11:17:35.248790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.966 [2024-07-26 11:17:35.248943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.966 [2024-07-26 11:17:35.248965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.966 [2024-07-26 11:17:35.248972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.966 [2024-07-26 11:17:35.248978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.966 [2024-07-26 11:17:35.248995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.966 qpair failed and we were unable to recover it. 00:29:15.967 [2024-07-26 11:17:35.258742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.967 [2024-07-26 11:17:35.258894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.967 [2024-07-26 11:17:35.258912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.967 [2024-07-26 11:17:35.258919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.967 [2024-07-26 11:17:35.258925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.967 [2024-07-26 11:17:35.258942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.967 qpair failed and we were unable to recover it. 
00:29:15.967 [2024-07-26 11:17:35.268845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.967 [2024-07-26 11:17:35.268993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.967 [2024-07-26 11:17:35.269011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.967 [2024-07-26 11:17:35.269018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.967 [2024-07-26 11:17:35.269024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.967 [2024-07-26 11:17:35.269041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.967 qpair failed and we were unable to recover it. 00:29:15.967 [2024-07-26 11:17:35.278790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.967 [2024-07-26 11:17:35.278955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.967 [2024-07-26 11:17:35.278972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.967 [2024-07-26 11:17:35.278980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.967 [2024-07-26 11:17:35.278986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.967 [2024-07-26 11:17:35.279003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.967 qpair failed and we were unable to recover it. 00:29:15.967 [2024-07-26 11:17:35.288834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.967 [2024-07-26 11:17:35.288988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.967 [2024-07-26 11:17:35.289006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.967 [2024-07-26 11:17:35.289013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.967 [2024-07-26 11:17:35.289022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.967 [2024-07-26 11:17:35.289039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.967 qpair failed and we were unable to recover it. 
00:29:15.967 [2024-07-26 11:17:35.298926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.967 [2024-07-26 11:17:35.299086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.967 [2024-07-26 11:17:35.299105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.967 [2024-07-26 11:17:35.299111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.967 [2024-07-26 11:17:35.299117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.967 [2024-07-26 11:17:35.299135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.967 qpair failed and we were unable to recover it. 00:29:15.967 [2024-07-26 11:17:35.308939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.967 [2024-07-26 11:17:35.309097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.967 [2024-07-26 11:17:35.309116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.967 [2024-07-26 11:17:35.309123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.967 [2024-07-26 11:17:35.309129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.967 [2024-07-26 11:17:35.309147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.967 qpair failed and we were unable to recover it. 00:29:15.967 [2024-07-26 11:17:35.318971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.967 [2024-07-26 11:17:35.319128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.967 [2024-07-26 11:17:35.319146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.967 [2024-07-26 11:17:35.319153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.967 [2024-07-26 11:17:35.319161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.967 [2024-07-26 11:17:35.319180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.967 qpair failed and we were unable to recover it. 
00:29:15.967 [2024-07-26 11:17:35.329048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.967 [2024-07-26 11:17:35.329203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.967 [2024-07-26 11:17:35.329221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.967 [2024-07-26 11:17:35.329228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.967 [2024-07-26 11:17:35.329234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.967 [2024-07-26 11:17:35.329251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.967 qpair failed and we were unable to recover it. 00:29:15.967 [2024-07-26 11:17:35.339014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.967 [2024-07-26 11:17:35.339167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.967 [2024-07-26 11:17:35.339185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.967 [2024-07-26 11:17:35.339192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.967 [2024-07-26 11:17:35.339198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.967 [2024-07-26 11:17:35.339216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.967 qpair failed and we were unable to recover it. 00:29:15.967 [2024-07-26 11:17:35.349036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.967 [2024-07-26 11:17:35.349224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.967 [2024-07-26 11:17:35.349242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.967 [2024-07-26 11:17:35.349248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.968 [2024-07-26 11:17:35.349254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.968 [2024-07-26 11:17:35.349272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.968 qpair failed and we were unable to recover it. 
00:29:15.968 [2024-07-26 11:17:35.359079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.968 [2024-07-26 11:17:35.359235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.968 [2024-07-26 11:17:35.359252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.968 [2024-07-26 11:17:35.359259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.968 [2024-07-26 11:17:35.359265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.968 [2024-07-26 11:17:35.359283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.968 qpair failed and we were unable to recover it. 00:29:15.968 [2024-07-26 11:17:35.369054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.968 [2024-07-26 11:17:35.369288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.968 [2024-07-26 11:17:35.369306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.968 [2024-07-26 11:17:35.369313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.968 [2024-07-26 11:17:35.369319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.968 [2024-07-26 11:17:35.369336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.968 qpair failed and we were unable to recover it. 00:29:15.968 [2024-07-26 11:17:35.379128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.968 [2024-07-26 11:17:35.379276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.968 [2024-07-26 11:17:35.379294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.968 [2024-07-26 11:17:35.379301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.968 [2024-07-26 11:17:35.379311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.968 [2024-07-26 11:17:35.379328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.968 qpair failed and we were unable to recover it. 
00:29:15.968 [2024-07-26 11:17:35.389116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.968 [2024-07-26 11:17:35.389267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.968 [2024-07-26 11:17:35.389285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.968 [2024-07-26 11:17:35.389292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.968 [2024-07-26 11:17:35.389298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.968 [2024-07-26 11:17:35.389315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.968 qpair failed and we were unable to recover it. 00:29:15.968 [2024-07-26 11:17:35.399218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.968 [2024-07-26 11:17:35.399370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.968 [2024-07-26 11:17:35.399388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.968 [2024-07-26 11:17:35.399395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.968 [2024-07-26 11:17:35.399401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.968 [2024-07-26 11:17:35.399418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.968 qpair failed and we were unable to recover it. 00:29:15.968 [2024-07-26 11:17:35.409275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.968 [2024-07-26 11:17:35.409426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.968 [2024-07-26 11:17:35.409443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.968 [2024-07-26 11:17:35.409451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.968 [2024-07-26 11:17:35.409457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.968 [2024-07-26 11:17:35.409474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.968 qpair failed and we were unable to recover it. 
00:29:15.968 [2024-07-26 11:17:35.419215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.968 [2024-07-26 11:17:35.419363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.968 [2024-07-26 11:17:35.419381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.968 [2024-07-26 11:17:35.419388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.968 [2024-07-26 11:17:35.419394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.968 [2024-07-26 11:17:35.419411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.968 qpair failed and we were unable to recover it. 00:29:15.968 [2024-07-26 11:17:35.429312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.968 [2024-07-26 11:17:35.429465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.968 [2024-07-26 11:17:35.429483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.968 [2024-07-26 11:17:35.429490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.968 [2024-07-26 11:17:35.429496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.968 [2024-07-26 11:17:35.429514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.968 qpair failed and we were unable to recover it. 00:29:15.968 [2024-07-26 11:17:35.439265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.968 [2024-07-26 11:17:35.439419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.968 [2024-07-26 11:17:35.439436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.968 [2024-07-26 11:17:35.439443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.968 [2024-07-26 11:17:35.439449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.968 [2024-07-26 11:17:35.439467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.968 qpair failed and we were unable to recover it. 
00:29:15.968 [2024-07-26 11:17:35.449383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.969 [2024-07-26 11:17:35.449534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.969 [2024-07-26 11:17:35.449552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.969 [2024-07-26 11:17:35.449559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.969 [2024-07-26 11:17:35.449565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.969 [2024-07-26 11:17:35.449582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.969 qpair failed and we were unable to recover it. 00:29:15.969 [2024-07-26 11:17:35.459327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.969 [2024-07-26 11:17:35.459478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.969 [2024-07-26 11:17:35.459496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.969 [2024-07-26 11:17:35.459503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.969 [2024-07-26 11:17:35.459509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:15.969 [2024-07-26 11:17:35.459526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:15.969 qpair failed and we were unable to recover it. 00:29:16.231 [2024-07-26 11:17:35.469409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.231 [2024-07-26 11:17:35.469562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.231 [2024-07-26 11:17:35.469580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.231 [2024-07-26 11:17:35.469592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.231 [2024-07-26 11:17:35.469599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.231 [2024-07-26 11:17:35.469616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.231 qpair failed and we were unable to recover it. 
00:29:16.231 [2024-07-26 11:17:35.479426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.231 [2024-07-26 11:17:35.479580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.231 [2024-07-26 11:17:35.479599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.231 [2024-07-26 11:17:35.479606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.231 [2024-07-26 11:17:35.479612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.231 [2024-07-26 11:17:35.479630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.231 qpair failed and we were unable to recover it. 00:29:16.231 [2024-07-26 11:17:35.489415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.231 [2024-07-26 11:17:35.489602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.232 [2024-07-26 11:17:35.489620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.232 [2024-07-26 11:17:35.489628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.232 [2024-07-26 11:17:35.489634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.232 [2024-07-26 11:17:35.489652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.232 qpair failed and we were unable to recover it. 00:29:16.232 [2024-07-26 11:17:35.499433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.232 [2024-07-26 11:17:35.499592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.232 [2024-07-26 11:17:35.499610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.232 [2024-07-26 11:17:35.499617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.232 [2024-07-26 11:17:35.499623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.232 [2024-07-26 11:17:35.499641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.232 qpair failed and we were unable to recover it. 
00:29:16.232 [2024-07-26 11:17:35.509546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.232 [2024-07-26 11:17:35.509700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.232 [2024-07-26 11:17:35.509720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.232 [2024-07-26 11:17:35.509727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.232 [2024-07-26 11:17:35.509733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.232 [2024-07-26 11:17:35.509750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.232 qpair failed and we were unable to recover it. 00:29:16.232 [2024-07-26 11:17:35.519580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.232 [2024-07-26 11:17:35.519736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.232 [2024-07-26 11:17:35.519754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.232 [2024-07-26 11:17:35.519762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.232 [2024-07-26 11:17:35.519769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.232 [2024-07-26 11:17:35.519786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.232 qpair failed and we were unable to recover it. 00:29:16.232 [2024-07-26 11:17:35.529606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.232 [2024-07-26 11:17:35.529758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.232 [2024-07-26 11:17:35.529776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.232 [2024-07-26 11:17:35.529783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.232 [2024-07-26 11:17:35.529789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.232 [2024-07-26 11:17:35.529807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.232 qpair failed and we were unable to recover it. 
00:29:16.232 [2024-07-26 11:17:35.539575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.232 [2024-07-26 11:17:35.539723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.232 [2024-07-26 11:17:35.539741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.232 [2024-07-26 11:17:35.539748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.232 [2024-07-26 11:17:35.539754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.232 [2024-07-26 11:17:35.539771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.232 qpair failed and we were unable to recover it. 00:29:16.232 [2024-07-26 11:17:35.549601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.232 [2024-07-26 11:17:35.549752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.232 [2024-07-26 11:17:35.549770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.232 [2024-07-26 11:17:35.549777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.232 [2024-07-26 11:17:35.549783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.232 [2024-07-26 11:17:35.549801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.232 qpair failed and we were unable to recover it. 00:29:16.232 [2024-07-26 11:17:35.559718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.232 [2024-07-26 11:17:35.559907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.232 [2024-07-26 11:17:35.559929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.232 [2024-07-26 11:17:35.559936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.232 [2024-07-26 11:17:35.559942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.232 [2024-07-26 11:17:35.559959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.232 qpair failed and we were unable to recover it. 
00:29:16.232 [2024-07-26 11:17:35.569694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.232 [2024-07-26 11:17:35.569843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.232 [2024-07-26 11:17:35.569862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.232 [2024-07-26 11:17:35.569869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.232 [2024-07-26 11:17:35.569875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.232 [2024-07-26 11:17:35.569893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.232 qpair failed and we were unable to recover it. 00:29:16.232 [2024-07-26 11:17:35.579759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.232 [2024-07-26 11:17:35.579912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.232 [2024-07-26 11:17:35.579930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.232 [2024-07-26 11:17:35.579937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.232 [2024-07-26 11:17:35.579943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.232 [2024-07-26 11:17:35.579960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.232 qpair failed and we were unable to recover it. 00:29:16.232 [2024-07-26 11:17:35.589705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.232 [2024-07-26 11:17:35.589856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.232 [2024-07-26 11:17:35.589874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.232 [2024-07-26 11:17:35.589881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.232 [2024-07-26 11:17:35.589887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.232 [2024-07-26 11:17:35.589904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.232 qpair failed and we were unable to recover it. 
00:29:16.232 [2024-07-26 11:17:35.599717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.232 [2024-07-26 11:17:35.599872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.232 [2024-07-26 11:17:35.599889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.232 [2024-07-26 11:17:35.599896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.232 [2024-07-26 11:17:35.599902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.232 [2024-07-26 11:17:35.599924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.232 qpair failed and we were unable to recover it. 00:29:16.232 [2024-07-26 11:17:35.609744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.232 [2024-07-26 11:17:35.609910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.232 [2024-07-26 11:17:35.609929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.232 [2024-07-26 11:17:35.609935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.233 [2024-07-26 11:17:35.609941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.233 [2024-07-26 11:17:35.609959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.233 qpair failed and we were unable to recover it. 00:29:16.233 [2024-07-26 11:17:35.619775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.233 [2024-07-26 11:17:35.619921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.233 [2024-07-26 11:17:35.619938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.233 [2024-07-26 11:17:35.619945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.233 [2024-07-26 11:17:35.619952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.233 [2024-07-26 11:17:35.619969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.233 qpair failed and we were unable to recover it. 
00:29:16.233 [2024-07-26 11:17:35.629802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.233 [2024-07-26 11:17:35.629955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.233 [2024-07-26 11:17:35.629972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.233 [2024-07-26 11:17:35.629980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.233 [2024-07-26 11:17:35.629986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.233 [2024-07-26 11:17:35.630003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.233 qpair failed and we were unable to recover it. 00:29:16.233 [2024-07-26 11:17:35.639910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.233 [2024-07-26 11:17:35.640063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.233 [2024-07-26 11:17:35.640081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.233 [2024-07-26 11:17:35.640087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.233 [2024-07-26 11:17:35.640093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.233 [2024-07-26 11:17:35.640111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.233 qpair failed and we were unable to recover it. 00:29:16.233 [2024-07-26 11:17:35.649950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.233 [2024-07-26 11:17:35.650106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.233 [2024-07-26 11:17:35.650128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.233 [2024-07-26 11:17:35.650135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.233 [2024-07-26 11:17:35.650141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.233 [2024-07-26 11:17:35.650158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.233 qpair failed and we were unable to recover it. 
00:29:16.233 [2024-07-26 11:17:35.660001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.233 [2024-07-26 11:17:35.660157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.233 [2024-07-26 11:17:35.660174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.233 [2024-07-26 11:17:35.660181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.233 [2024-07-26 11:17:35.660187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.233 [2024-07-26 11:17:35.660204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.233 qpair failed and we were unable to recover it. 00:29:16.233 [2024-07-26 11:17:35.670009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.233 [2024-07-26 11:17:35.670162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.233 [2024-07-26 11:17:35.670180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.233 [2024-07-26 11:17:35.670187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.233 [2024-07-26 11:17:35.670193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.233 [2024-07-26 11:17:35.670211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.233 qpair failed and we were unable to recover it. 00:29:16.233 [2024-07-26 11:17:35.680024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.233 [2024-07-26 11:17:35.680197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.233 [2024-07-26 11:17:35.680215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.233 [2024-07-26 11:17:35.680222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.233 [2024-07-26 11:17:35.680229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.233 [2024-07-26 11:17:35.680246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.233 qpair failed and we were unable to recover it. 
00:29:16.233 [2024-07-26 11:17:35.690059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.233 [2024-07-26 11:17:35.690208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.233 [2024-07-26 11:17:35.690226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.233 [2024-07-26 11:17:35.690233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.233 [2024-07-26 11:17:35.690240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.233 [2024-07-26 11:17:35.690260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.233 qpair failed and we were unable to recover it. 00:29:16.233 [2024-07-26 11:17:35.700086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.233 [2024-07-26 11:17:35.700237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.233 [2024-07-26 11:17:35.700254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.233 [2024-07-26 11:17:35.700261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.233 [2024-07-26 11:17:35.700267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.233 [2024-07-26 11:17:35.700285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.233 qpair failed and we were unable to recover it. 00:29:16.233 [2024-07-26 11:17:35.710107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.233 [2024-07-26 11:17:35.710255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.233 [2024-07-26 11:17:35.710272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.233 [2024-07-26 11:17:35.710279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.233 [2024-07-26 11:17:35.710285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.233 [2024-07-26 11:17:35.710303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.233 qpair failed and we were unable to recover it. 
00:29:16.234 [2024-07-26 11:17:35.720062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.234 [2024-07-26 11:17:35.720252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.234 [2024-07-26 11:17:35.720270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.234 [2024-07-26 11:17:35.720277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.234 [2024-07-26 11:17:35.720283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.234 [2024-07-26 11:17:35.720301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.234 qpair failed and we were unable to recover it. 00:29:16.496 [2024-07-26 11:17:35.730189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.496 [2024-07-26 11:17:35.730352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.496 [2024-07-26 11:17:35.730370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.496 [2024-07-26 11:17:35.730377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.496 [2024-07-26 11:17:35.730384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.496 [2024-07-26 11:17:35.730402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.496 qpair failed and we were unable to recover it. 00:29:16.496 [2024-07-26 11:17:35.740211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.496 [2024-07-26 11:17:35.740369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.496 [2024-07-26 11:17:35.740387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.496 [2024-07-26 11:17:35.740394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.496 [2024-07-26 11:17:35.740399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.496 [2024-07-26 11:17:35.740417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.496 qpair failed and we were unable to recover it. 
00:29:16.496 [2024-07-26 11:17:35.750229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.496 [2024-07-26 11:17:35.750378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.496 [2024-07-26 11:17:35.750395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.496 [2024-07-26 11:17:35.750403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.496 [2024-07-26 11:17:35.750409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.496 [2024-07-26 11:17:35.750426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.496 qpair failed and we were unable to recover it. 00:29:16.496 [2024-07-26 11:17:35.760234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.496 [2024-07-26 11:17:35.760382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.496 [2024-07-26 11:17:35.760400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.496 [2024-07-26 11:17:35.760407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.496 [2024-07-26 11:17:35.760413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.496 [2024-07-26 11:17:35.760431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.496 qpair failed and we were unable to recover it. 00:29:16.496 [2024-07-26 11:17:35.770279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.496 [2024-07-26 11:17:35.770428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.496 [2024-07-26 11:17:35.770446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.496 [2024-07-26 11:17:35.770453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.496 [2024-07-26 11:17:35.770459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.496 [2024-07-26 11:17:35.770476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.496 qpair failed and we were unable to recover it. 
00:29:16.496 [2024-07-26 11:17:35.780310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.496 [2024-07-26 11:17:35.780462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.496 [2024-07-26 11:17:35.780480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.496 [2024-07-26 11:17:35.780487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.496 [2024-07-26 11:17:35.780496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.496 [2024-07-26 11:17:35.780513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.496 qpair failed and we were unable to recover it. 00:29:16.496 [2024-07-26 11:17:35.790258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.496 [2024-07-26 11:17:35.790409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.497 [2024-07-26 11:17:35.790427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.497 [2024-07-26 11:17:35.790434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.497 [2024-07-26 11:17:35.790440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.497 [2024-07-26 11:17:35.790457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.497 qpair failed and we were unable to recover it. 00:29:16.497 [2024-07-26 11:17:35.800346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.497 [2024-07-26 11:17:35.800497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.497 [2024-07-26 11:17:35.800515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.497 [2024-07-26 11:17:35.800522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.497 [2024-07-26 11:17:35.800528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.497 [2024-07-26 11:17:35.800545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.497 qpair failed and we were unable to recover it. 
00:29:16.497 [2024-07-26 11:17:35.810390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.497 [2024-07-26 11:17:35.810541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.497 [2024-07-26 11:17:35.810558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.497 [2024-07-26 11:17:35.810565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.497 [2024-07-26 11:17:35.810571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.497 [2024-07-26 11:17:35.810588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.497 qpair failed and we were unable to recover it. 00:29:16.497 [2024-07-26 11:17:35.820393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.497 [2024-07-26 11:17:35.820545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.497 [2024-07-26 11:17:35.820562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.497 [2024-07-26 11:17:35.820569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.497 [2024-07-26 11:17:35.820575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.497 [2024-07-26 11:17:35.820593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.497 qpair failed and we were unable to recover it. 00:29:16.497 [2024-07-26 11:17:35.830447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.497 [2024-07-26 11:17:35.830594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.497 [2024-07-26 11:17:35.830612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.497 [2024-07-26 11:17:35.830619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.497 [2024-07-26 11:17:35.830625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.497 [2024-07-26 11:17:35.830642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.497 qpair failed and we were unable to recover it. 
00:29:16.497 [2024-07-26 11:17:35.840469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.497 [2024-07-26 11:17:35.840619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.497 [2024-07-26 11:17:35.840636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.497 [2024-07-26 11:17:35.840643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.497 [2024-07-26 11:17:35.840649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.497 [2024-07-26 11:17:35.840667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.497 qpair failed and we were unable to recover it. 00:29:16.497 [2024-07-26 11:17:35.850503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.497 [2024-07-26 11:17:35.850658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.497 [2024-07-26 11:17:35.850676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.497 [2024-07-26 11:17:35.850683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.497 [2024-07-26 11:17:35.850689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.497 [2024-07-26 11:17:35.850706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.497 qpair failed and we were unable to recover it. 00:29:16.497 [2024-07-26 11:17:35.860525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.497 [2024-07-26 11:17:35.860672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.497 [2024-07-26 11:17:35.860690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.497 [2024-07-26 11:17:35.860697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.497 [2024-07-26 11:17:35.860704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.497 [2024-07-26 11:17:35.860721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.497 qpair failed and we were unable to recover it. 
00:29:16.497 [2024-07-26 11:17:35.870809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.497 [2024-07-26 11:17:35.870963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.497 [2024-07-26 11:17:35.870980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.497 [2024-07-26 11:17:35.870991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.497 [2024-07-26 11:17:35.870997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.497 [2024-07-26 11:17:35.871014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.497 qpair failed and we were unable to recover it. 00:29:16.497 [2024-07-26 11:17:35.880579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.497 [2024-07-26 11:17:35.880728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.497 [2024-07-26 11:17:35.880745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.497 [2024-07-26 11:17:35.880753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.497 [2024-07-26 11:17:35.880759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.497 [2024-07-26 11:17:35.880777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.497 qpair failed and we were unable to recover it. 00:29:16.497 [2024-07-26 11:17:35.890600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.497 [2024-07-26 11:17:35.890753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.497 [2024-07-26 11:17:35.890771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.497 [2024-07-26 11:17:35.890779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.497 [2024-07-26 11:17:35.890785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:16.497 [2024-07-26 11:17:35.890801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:16.497 qpair failed and we were unable to recover it. 
00:29:16.497 [2024-07-26 11:17:35.900639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.497 [2024-07-26 11:17:35.900788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.497 [2024-07-26 11:17:35.900806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.497 [2024-07-26 11:17:35.900813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.497 [2024-07-26 11:17:35.900819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.497 [2024-07-26 11:17:35.900836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.497 qpair failed and we were unable to recover it.
00:29:16.497 [2024-07-26 11:17:35.910651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.497 [2024-07-26 11:17:35.910801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.497 [2024-07-26 11:17:35.910819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.497 [2024-07-26 11:17:35.910826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.497 [2024-07-26 11:17:35.910831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.497 [2024-07-26 11:17:35.910849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.497 qpair failed and we were unable to recover it.
00:29:16.497 [2024-07-26 11:17:35.920696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.497 [2024-07-26 11:17:35.920849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.497 [2024-07-26 11:17:35.920867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.497 [2024-07-26 11:17:35.920874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.498 [2024-07-26 11:17:35.920880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.498 [2024-07-26 11:17:35.920897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.498 qpair failed and we were unable to recover it.
00:29:16.498 [2024-07-26 11:17:35.930644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.498 [2024-07-26 11:17:35.930788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.498 [2024-07-26 11:17:35.930805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.498 [2024-07-26 11:17:35.930812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.498 [2024-07-26 11:17:35.930819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.498 [2024-07-26 11:17:35.930837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.498 qpair failed and we were unable to recover it.
00:29:16.498 [2024-07-26 11:17:35.940743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.498 [2024-07-26 11:17:35.940894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.498 [2024-07-26 11:17:35.940912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.498 [2024-07-26 11:17:35.940919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.498 [2024-07-26 11:17:35.940924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.498 [2024-07-26 11:17:35.940942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.498 qpair failed and we were unable to recover it.
00:29:16.498 [2024-07-26 11:17:35.950802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.498 [2024-07-26 11:17:35.950965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.498 [2024-07-26 11:17:35.950983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.498 [2024-07-26 11:17:35.950990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.498 [2024-07-26 11:17:35.950996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.498 [2024-07-26 11:17:35.951013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.498 qpair failed and we were unable to recover it.
00:29:16.498 [2024-07-26 11:17:35.960793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.498 [2024-07-26 11:17:35.960942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.498 [2024-07-26 11:17:35.960960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.498 [2024-07-26 11:17:35.960970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.498 [2024-07-26 11:17:35.960977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.498 [2024-07-26 11:17:35.960994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.498 qpair failed and we were unable to recover it.
00:29:16.498 [2024-07-26 11:17:35.970838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.498 [2024-07-26 11:17:35.970995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.498 [2024-07-26 11:17:35.971012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.498 [2024-07-26 11:17:35.971019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.498 [2024-07-26 11:17:35.971025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.498 [2024-07-26 11:17:35.971054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.498 qpair failed and we were unable to recover it.
00:29:16.498 [2024-07-26 11:17:35.980863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.498 [2024-07-26 11:17:35.981009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.498 [2024-07-26 11:17:35.981026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.498 [2024-07-26 11:17:35.981033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.498 [2024-07-26 11:17:35.981039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.498 [2024-07-26 11:17:35.981063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.498 qpair failed and we were unable to recover it.
00:29:16.760 [2024-07-26 11:17:35.990896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.761 [2024-07-26 11:17:35.991057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.761 [2024-07-26 11:17:35.991075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.761 [2024-07-26 11:17:35.991083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.761 [2024-07-26 11:17:35.991089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.761 [2024-07-26 11:17:35.991107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.761 qpair failed and we were unable to recover it.
00:29:16.761 [2024-07-26 11:17:36.000958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.761 [2024-07-26 11:17:36.001143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.761 [2024-07-26 11:17:36.001161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.761 [2024-07-26 11:17:36.001168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.761 [2024-07-26 11:17:36.001174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.761 [2024-07-26 11:17:36.001191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.761 qpair failed and we were unable to recover it.
00:29:16.761 [2024-07-26 11:17:36.010988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.761 [2024-07-26 11:17:36.011173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.761 [2024-07-26 11:17:36.011191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.761 [2024-07-26 11:17:36.011198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.761 [2024-07-26 11:17:36.011204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.761 [2024-07-26 11:17:36.011220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.761 qpair failed and we were unable to recover it.
00:29:16.761 [2024-07-26 11:17:36.020988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.761 [2024-07-26 11:17:36.021146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.761 [2024-07-26 11:17:36.021164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.761 [2024-07-26 11:17:36.021171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.761 [2024-07-26 11:17:36.021177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.761 [2024-07-26 11:17:36.021194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.761 qpair failed and we were unable to recover it.
00:29:16.761 [2024-07-26 11:17:36.031020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.761 [2024-07-26 11:17:36.031172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.761 [2024-07-26 11:17:36.031189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.761 [2024-07-26 11:17:36.031196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.761 [2024-07-26 11:17:36.031202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.761 [2024-07-26 11:17:36.031220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.761 qpair failed and we were unable to recover it.
00:29:16.761 [2024-07-26 11:17:36.041054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.761 [2024-07-26 11:17:36.041204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.761 [2024-07-26 11:17:36.041221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.761 [2024-07-26 11:17:36.041228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.761 [2024-07-26 11:17:36.041234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.761 [2024-07-26 11:17:36.041252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.761 qpair failed and we were unable to recover it.
00:29:16.761 [2024-07-26 11:17:36.051077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.761 [2024-07-26 11:17:36.051259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.761 [2024-07-26 11:17:36.051280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.761 [2024-07-26 11:17:36.051287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.761 [2024-07-26 11:17:36.051293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.761 [2024-07-26 11:17:36.051310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.761 qpair failed and we were unable to recover it.
00:29:16.761 [2024-07-26 11:17:36.061105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.761 [2024-07-26 11:17:36.061256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.761 [2024-07-26 11:17:36.061274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.761 [2024-07-26 11:17:36.061280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.761 [2024-07-26 11:17:36.061286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.761 [2024-07-26 11:17:36.061304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.761 qpair failed and we were unable to recover it.
00:29:16.761 [2024-07-26 11:17:36.071131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.761 [2024-07-26 11:17:36.071282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.761 [2024-07-26 11:17:36.071300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.761 [2024-07-26 11:17:36.071307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.761 [2024-07-26 11:17:36.071313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.761 [2024-07-26 11:17:36.071331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.761 qpair failed and we were unable to recover it.
00:29:16.761 [2024-07-26 11:17:36.081154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.761 [2024-07-26 11:17:36.081436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.761 [2024-07-26 11:17:36.081453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.761 [2024-07-26 11:17:36.081461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.761 [2024-07-26 11:17:36.081468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.761 [2024-07-26 11:17:36.081485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.761 qpair failed and we were unable to recover it.
00:29:16.761 [2024-07-26 11:17:36.091154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.761 [2024-07-26 11:17:36.091311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.761 [2024-07-26 11:17:36.091329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.761 [2024-07-26 11:17:36.091337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.761 [2024-07-26 11:17:36.091343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.761 [2024-07-26 11:17:36.091364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.761 qpair failed and we were unable to recover it.
00:29:16.761 [2024-07-26 11:17:36.101214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.761 [2024-07-26 11:17:36.101363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.761 [2024-07-26 11:17:36.101380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.761 [2024-07-26 11:17:36.101388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.761 [2024-07-26 11:17:36.101393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.761 [2024-07-26 11:17:36.101411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.761 qpair failed and we were unable to recover it.
00:29:16.761 [2024-07-26 11:17:36.111234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.761 [2024-07-26 11:17:36.111390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.761 [2024-07-26 11:17:36.111408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.761 [2024-07-26 11:17:36.111415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.761 [2024-07-26 11:17:36.111421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.761 [2024-07-26 11:17:36.111439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.761 qpair failed and we were unable to recover it.
00:29:16.761 [2024-07-26 11:17:36.121277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.762 [2024-07-26 11:17:36.121431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.762 [2024-07-26 11:17:36.121449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.762 [2024-07-26 11:17:36.121456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.762 [2024-07-26 11:17:36.121462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.762 [2024-07-26 11:17:36.121479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.762 qpair failed and we were unable to recover it.
00:29:16.762 [2024-07-26 11:17:36.131275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.762 [2024-07-26 11:17:36.131425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.762 [2024-07-26 11:17:36.131442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.762 [2024-07-26 11:17:36.131449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.762 [2024-07-26 11:17:36.131455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.762 [2024-07-26 11:17:36.131473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.762 qpair failed and we were unable to recover it.
00:29:16.762 [2024-07-26 11:17:36.141311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.762 [2024-07-26 11:17:36.141683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.762 [2024-07-26 11:17:36.141704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.762 [2024-07-26 11:17:36.141711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.762 [2024-07-26 11:17:36.141717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.762 [2024-07-26 11:17:36.141734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.762 qpair failed and we were unable to recover it.
00:29:16.762 [2024-07-26 11:17:36.151610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.762 [2024-07-26 11:17:36.151774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.762 [2024-07-26 11:17:36.151791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.762 [2024-07-26 11:17:36.151798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.762 [2024-07-26 11:17:36.151804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.762 [2024-07-26 11:17:36.151821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.762 qpair failed and we were unable to recover it.
00:29:16.762 [2024-07-26 11:17:36.161379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.762 [2024-07-26 11:17:36.161528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.762 [2024-07-26 11:17:36.161546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.762 [2024-07-26 11:17:36.161553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.762 [2024-07-26 11:17:36.161559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.762 [2024-07-26 11:17:36.161576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.762 qpair failed and we were unable to recover it.
00:29:16.762 [2024-07-26 11:17:36.171428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.762 [2024-07-26 11:17:36.171590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.762 [2024-07-26 11:17:36.171607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.762 [2024-07-26 11:17:36.171614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.762 [2024-07-26 11:17:36.171620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.762 [2024-07-26 11:17:36.171638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.762 qpair failed and we were unable to recover it.
00:29:16.762 [2024-07-26 11:17:36.181415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.762 [2024-07-26 11:17:36.181563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.762 [2024-07-26 11:17:36.181580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.762 [2024-07-26 11:17:36.181587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.762 [2024-07-26 11:17:36.181596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.762 [2024-07-26 11:17:36.181613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.762 qpair failed and we were unable to recover it.
00:29:16.762 [2024-07-26 11:17:36.191467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.762 [2024-07-26 11:17:36.191617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.762 [2024-07-26 11:17:36.191635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.762 [2024-07-26 11:17:36.191642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.762 [2024-07-26 11:17:36.191648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.762 [2024-07-26 11:17:36.191665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.762 qpair failed and we were unable to recover it.
00:29:16.762 [2024-07-26 11:17:36.201499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.762 [2024-07-26 11:17:36.201648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.762 [2024-07-26 11:17:36.201666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.762 [2024-07-26 11:17:36.201672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.762 [2024-07-26 11:17:36.201679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.762 [2024-07-26 11:17:36.201696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.762 qpair failed and we were unable to recover it.
00:29:16.762 [2024-07-26 11:17:36.211532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.762 [2024-07-26 11:17:36.211680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.762 [2024-07-26 11:17:36.211698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.762 [2024-07-26 11:17:36.211705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.762 [2024-07-26 11:17:36.211711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.762 [2024-07-26 11:17:36.211728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.762 qpair failed and we were unable to recover it.
00:29:16.762 [2024-07-26 11:17:36.221552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.762 [2024-07-26 11:17:36.221703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.762 [2024-07-26 11:17:36.221720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.762 [2024-07-26 11:17:36.221728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.762 [2024-07-26 11:17:36.221736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.762 [2024-07-26 11:17:36.221753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.762 qpair failed and we were unable to recover it.
00:29:16.762 [2024-07-26 11:17:36.231578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.762 [2024-07-26 11:17:36.231730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.762 [2024-07-26 11:17:36.231747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.762 [2024-07-26 11:17:36.231754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.762 [2024-07-26 11:17:36.231761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.762 [2024-07-26 11:17:36.231778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.762 qpair failed and we were unable to recover it.
00:29:16.762 [2024-07-26 11:17:36.241591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.762 [2024-07-26 11:17:36.241742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.762 [2024-07-26 11:17:36.241760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.762 [2024-07-26 11:17:36.241767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.762 [2024-07-26 11:17:36.241773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.762 [2024-07-26 11:17:36.241791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.762 qpair failed and we were unable to recover it.
00:29:16.762 [2024-07-26 11:17:36.251635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:16.763 [2024-07-26 11:17:36.251788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:16.763 [2024-07-26 11:17:36.251806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:16.763 [2024-07-26 11:17:36.251813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:16.763 [2024-07-26 11:17:36.251819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:16.763 [2024-07-26 11:17:36.251836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:16.763 qpair failed and we were unable to recover it.
00:29:17.025 [2024-07-26 11:17:36.261723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.025 [2024-07-26 11:17:36.261875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.025 [2024-07-26 11:17:36.261893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.025 [2024-07-26 11:17:36.261900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.025 [2024-07-26 11:17:36.261907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.025 [2024-07-26 11:17:36.261925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.025 qpair failed and we were unable to recover it.
00:29:17.025 [2024-07-26 11:17:36.271704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.025 [2024-07-26 11:17:36.271852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.025 [2024-07-26 11:17:36.271870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.025 [2024-07-26 11:17:36.271881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.025 [2024-07-26 11:17:36.271887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.025 [2024-07-26 11:17:36.271904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.025 qpair failed and we were unable to recover it.
00:29:17.025 [2024-07-26 11:17:36.281714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.025 [2024-07-26 11:17:36.281860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.025 [2024-07-26 11:17:36.281878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.025 [2024-07-26 11:17:36.281885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.025 [2024-07-26 11:17:36.281891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.025 [2024-07-26 11:17:36.281908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.025 qpair failed and we were unable to recover it.
00:29:17.025 [2024-07-26 11:17:36.291680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.025 [2024-07-26 11:17:36.291843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.025 [2024-07-26 11:17:36.291860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.025 [2024-07-26 11:17:36.291868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.025 [2024-07-26 11:17:36.291874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.025 [2024-07-26 11:17:36.291891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.025 qpair failed and we were unable to recover it.
00:29:17.025 [2024-07-26 11:17:36.301782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.025 [2024-07-26 11:17:36.301933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.025 [2024-07-26 11:17:36.301951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.025 [2024-07-26 11:17:36.301958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.025 [2024-07-26 11:17:36.301964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.025 [2024-07-26 11:17:36.301981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.025 qpair failed and we were unable to recover it.
00:29:17.025 [2024-07-26 11:17:36.311818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.025 [2024-07-26 11:17:36.311968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.025 [2024-07-26 11:17:36.311986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.025 [2024-07-26 11:17:36.311993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.025 [2024-07-26 11:17:36.311999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.025 [2024-07-26 11:17:36.312016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.025 qpair failed and we were unable to recover it.
00:29:17.025 [2024-07-26 11:17:36.321912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.025 [2024-07-26 11:17:36.322077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.025 [2024-07-26 11:17:36.322095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.025 [2024-07-26 11:17:36.322101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.025 [2024-07-26 11:17:36.322107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.025 [2024-07-26 11:17:36.322125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.025 qpair failed and we were unable to recover it.
00:29:17.025 [2024-07-26 11:17:36.331863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.025 [2024-07-26 11:17:36.332018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.025 [2024-07-26 11:17:36.332038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.025 [2024-07-26 11:17:36.332051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.025 [2024-07-26 11:17:36.332058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.025 [2024-07-26 11:17:36.332075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.025 qpair failed and we were unable to recover it.
00:29:17.025 [2024-07-26 11:17:36.342144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.025 [2024-07-26 11:17:36.342291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.025 [2024-07-26 11:17:36.342309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.025 [2024-07-26 11:17:36.342316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.025 [2024-07-26 11:17:36.342322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.025 [2024-07-26 11:17:36.342340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.025 qpair failed and we were unable to recover it.
00:29:17.025 [2024-07-26 11:17:36.351955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.025 [2024-07-26 11:17:36.352116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.025 [2024-07-26 11:17:36.352134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.025 [2024-07-26 11:17:36.352140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.025 [2024-07-26 11:17:36.352146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.025 [2024-07-26 11:17:36.352163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.025 qpair failed and we were unable to recover it.
00:29:17.025 [2024-07-26 11:17:36.361951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.025 [2024-07-26 11:17:36.362111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.025 [2024-07-26 11:17:36.362129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.025 [2024-07-26 11:17:36.362140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.025 [2024-07-26 11:17:36.362147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.025 [2024-07-26 11:17:36.362164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.026 qpair failed and we were unable to recover it.
00:29:17.026 [2024-07-26 11:17:36.371979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.026 [2024-07-26 11:17:36.372139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.026 [2024-07-26 11:17:36.372157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.026 [2024-07-26 11:17:36.372164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.026 [2024-07-26 11:17:36.372170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.026 [2024-07-26 11:17:36.372187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.026 qpair failed and we were unable to recover it.
00:29:17.026 [2024-07-26 11:17:36.381993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.026 [2024-07-26 11:17:36.382142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.026 [2024-07-26 11:17:36.382160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.026 [2024-07-26 11:17:36.382167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.026 [2024-07-26 11:17:36.382173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.026 [2024-07-26 11:17:36.382191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.026 qpair failed and we were unable to recover it.
00:29:17.026 [2024-07-26 11:17:36.392028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.026 [2024-07-26 11:17:36.392186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.026 [2024-07-26 11:17:36.392204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.026 [2024-07-26 11:17:36.392211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.026 [2024-07-26 11:17:36.392217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.026 [2024-07-26 11:17:36.392235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.026 qpair failed and we were unable to recover it.
00:29:17.026 [2024-07-26 11:17:36.402070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.026 [2024-07-26 11:17:36.402218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.026 [2024-07-26 11:17:36.402236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.026 [2024-07-26 11:17:36.402244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.026 [2024-07-26 11:17:36.402250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.026 [2024-07-26 11:17:36.402267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.026 qpair failed and we were unable to recover it.
00:29:17.026 [2024-07-26 11:17:36.412081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.026 [2024-07-26 11:17:36.412232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.026 [2024-07-26 11:17:36.412250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.026 [2024-07-26 11:17:36.412257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.026 [2024-07-26 11:17:36.412263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.026 [2024-07-26 11:17:36.412280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.026 qpair failed and we were unable to recover it.
00:29:17.026 [2024-07-26 11:17:36.422177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.026 [2024-07-26 11:17:36.422329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.026 [2024-07-26 11:17:36.422347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.026 [2024-07-26 11:17:36.422354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.026 [2024-07-26 11:17:36.422360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.026 [2024-07-26 11:17:36.422377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.026 qpair failed and we were unable to recover it.
00:29:17.026 [2024-07-26 11:17:36.432122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.026 [2024-07-26 11:17:36.432271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.026 [2024-07-26 11:17:36.432289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.026 [2024-07-26 11:17:36.432296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.026 [2024-07-26 11:17:36.432302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.026 [2024-07-26 11:17:36.432319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.026 qpair failed and we were unable to recover it.
00:29:17.026 [2024-07-26 11:17:36.442164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.026 [2024-07-26 11:17:36.442316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.026 [2024-07-26 11:17:36.442334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.026 [2024-07-26 11:17:36.442341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.026 [2024-07-26 11:17:36.442347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.026 [2024-07-26 11:17:36.442364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.026 qpair failed and we were unable to recover it.
00:29:17.026 [2024-07-26 11:17:36.452194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.026 [2024-07-26 11:17:36.452341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.026 [2024-07-26 11:17:36.452362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.026 [2024-07-26 11:17:36.452369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.026 [2024-07-26 11:17:36.452375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.026 [2024-07-26 11:17:36.452392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.026 qpair failed and we were unable to recover it.
00:29:17.026 [2024-07-26 11:17:36.462200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.026 [2024-07-26 11:17:36.462348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.026 [2024-07-26 11:17:36.462366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.026 [2024-07-26 11:17:36.462373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.026 [2024-07-26 11:17:36.462379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90
00:29:17.026 [2024-07-26 11:17:36.462397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.026 qpair failed and we were unable to recover it.
00:29:17.026 [2024-07-26 11:17:36.472254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.026 [2024-07-26 11:17:36.472405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.026 [2024-07-26 11:17:36.472423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.026 [2024-07-26 11:17:36.472430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.026 [2024-07-26 11:17:36.472436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.026 [2024-07-26 11:17:36.472454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.026 qpair failed and we were unable to recover it. 00:29:17.026 [2024-07-26 11:17:36.482281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.026 [2024-07-26 11:17:36.482430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.026 [2024-07-26 11:17:36.482448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.026 [2024-07-26 11:17:36.482454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.026 [2024-07-26 11:17:36.482461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.026 [2024-07-26 11:17:36.482478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.026 qpair failed and we were unable to recover it. 00:29:17.026 [2024-07-26 11:17:36.492292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.027 [2024-07-26 11:17:36.492442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.027 [2024-07-26 11:17:36.492460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.027 [2024-07-26 11:17:36.492467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.027 [2024-07-26 11:17:36.492473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.027 [2024-07-26 11:17:36.492493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.027 qpair failed and we were unable to recover it. 
00:29:17.027 [2024-07-26 11:17:36.502347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.027 [2024-07-26 11:17:36.502495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.027 [2024-07-26 11:17:36.502512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.027 [2024-07-26 11:17:36.502519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.027 [2024-07-26 11:17:36.502525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.027 [2024-07-26 11:17:36.502542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.027 qpair failed and we were unable to recover it. 00:29:17.027 [2024-07-26 11:17:36.512388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.027 [2024-07-26 11:17:36.512539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.027 [2024-07-26 11:17:36.512557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.027 [2024-07-26 11:17:36.512564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.027 [2024-07-26 11:17:36.512570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.027 [2024-07-26 11:17:36.512586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.027 qpair failed and we were unable to recover it. 00:29:17.290 [2024-07-26 11:17:36.522401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.290 [2024-07-26 11:17:36.522553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.290 [2024-07-26 11:17:36.522571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.290 [2024-07-26 11:17:36.522579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.290 [2024-07-26 11:17:36.522586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.291 [2024-07-26 11:17:36.522603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.291 qpair failed and we were unable to recover it. 
00:29:17.291 [2024-07-26 11:17:36.532427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.291 [2024-07-26 11:17:36.532577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.291 [2024-07-26 11:17:36.532595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.291 [2024-07-26 11:17:36.532602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.291 [2024-07-26 11:17:36.532610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.291 [2024-07-26 11:17:36.532628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.291 qpair failed and we were unable to recover it. 00:29:17.291 [2024-07-26 11:17:36.542434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.291 [2024-07-26 11:17:36.542584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.291 [2024-07-26 11:17:36.542606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.291 [2024-07-26 11:17:36.542613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.291 [2024-07-26 11:17:36.542621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.291 [2024-07-26 11:17:36.542638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.291 qpair failed and we were unable to recover it. 00:29:17.291 [2024-07-26 11:17:36.552411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.291 [2024-07-26 11:17:36.552558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.291 [2024-07-26 11:17:36.552576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.291 [2024-07-26 11:17:36.552583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.291 [2024-07-26 11:17:36.552589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.291 [2024-07-26 11:17:36.552607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.291 qpair failed and we were unable to recover it. 
00:29:17.291 [2024-07-26 11:17:36.562483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.291 [2024-07-26 11:17:36.562631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.291 [2024-07-26 11:17:36.562648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.291 [2024-07-26 11:17:36.562656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.291 [2024-07-26 11:17:36.562662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.291 [2024-07-26 11:17:36.562679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.291 qpair failed and we were unable to recover it. 00:29:17.291 [2024-07-26 11:17:36.572474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.291 [2024-07-26 11:17:36.572637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.291 [2024-07-26 11:17:36.572655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.291 [2024-07-26 11:17:36.572662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.291 [2024-07-26 11:17:36.572668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.291 [2024-07-26 11:17:36.572686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.291 qpair failed and we were unable to recover it. 00:29:17.291 [2024-07-26 11:17:36.582542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.291 [2024-07-26 11:17:36.582696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.291 [2024-07-26 11:17:36.582714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.291 [2024-07-26 11:17:36.582721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.291 [2024-07-26 11:17:36.582731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.291 [2024-07-26 11:17:36.582749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.291 qpair failed and we were unable to recover it. 
00:29:17.291 [2024-07-26 11:17:36.592612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.291 [2024-07-26 11:17:36.592765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.291 [2024-07-26 11:17:36.592782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.291 [2024-07-26 11:17:36.592789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.291 [2024-07-26 11:17:36.592795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.291 [2024-07-26 11:17:36.592813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.291 qpair failed and we were unable to recover it. 00:29:17.291 [2024-07-26 11:17:36.602541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.291 [2024-07-26 11:17:36.602693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.291 [2024-07-26 11:17:36.602711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.291 [2024-07-26 11:17:36.602718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.291 [2024-07-26 11:17:36.602724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.291 [2024-07-26 11:17:36.602742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.291 qpair failed and we were unable to recover it. 00:29:17.291 [2024-07-26 11:17:36.612651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.291 [2024-07-26 11:17:36.612797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.291 [2024-07-26 11:17:36.612815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.291 [2024-07-26 11:17:36.612821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.291 [2024-07-26 11:17:36.612828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.291 [2024-07-26 11:17:36.612845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.291 qpair failed and we were unable to recover it. 
00:29:17.291 [2024-07-26 11:17:36.622667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.291 [2024-07-26 11:17:36.622820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.291 [2024-07-26 11:17:36.622838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.291 [2024-07-26 11:17:36.622846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.291 [2024-07-26 11:17:36.622853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.291 [2024-07-26 11:17:36.622870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.291 qpair failed and we were unable to recover it. 00:29:17.291 [2024-07-26 11:17:36.632715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.291 [2024-07-26 11:17:36.632900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.291 [2024-07-26 11:17:36.632918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.291 [2024-07-26 11:17:36.632924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.291 [2024-07-26 11:17:36.632930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.291 [2024-07-26 11:17:36.632948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.292 qpair failed and we were unable to recover it. 00:29:17.292 [2024-07-26 11:17:36.642706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.292 [2024-07-26 11:17:36.642868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.292 [2024-07-26 11:17:36.642886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.292 [2024-07-26 11:17:36.642893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.292 [2024-07-26 11:17:36.642899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.292 [2024-07-26 11:17:36.642916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.292 qpair failed and we were unable to recover it. 
00:29:17.292 [2024-07-26 11:17:36.652736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.292 [2024-07-26 11:17:36.652886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.292 [2024-07-26 11:17:36.652904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.292 [2024-07-26 11:17:36.652911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.292 [2024-07-26 11:17:36.652917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.292 [2024-07-26 11:17:36.652934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.292 qpair failed and we were unable to recover it. 00:29:17.292 [2024-07-26 11:17:36.662788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.292 [2024-07-26 11:17:36.662940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.292 [2024-07-26 11:17:36.662958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.292 [2024-07-26 11:17:36.662965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.292 [2024-07-26 11:17:36.662971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.292 [2024-07-26 11:17:36.662989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.292 qpair failed and we were unable to recover it. 00:29:17.292 [2024-07-26 11:17:36.672803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.292 [2024-07-26 11:17:36.672958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.292 [2024-07-26 11:17:36.672976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.292 [2024-07-26 11:17:36.672983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.292 [2024-07-26 11:17:36.672996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.292 [2024-07-26 11:17:36.673012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.292 qpair failed and we were unable to recover it. 
00:29:17.292 [2024-07-26 11:17:36.682834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.292 [2024-07-26 11:17:36.682988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.292 [2024-07-26 11:17:36.683005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.292 [2024-07-26 11:17:36.683013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.292 [2024-07-26 11:17:36.683019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.292 [2024-07-26 11:17:36.683036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.292 qpair failed and we were unable to recover it. 00:29:17.292 [2024-07-26 11:17:36.692874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.292 [2024-07-26 11:17:36.693025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.292 [2024-07-26 11:17:36.693048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.292 [2024-07-26 11:17:36.693057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.292 [2024-07-26 11:17:36.693064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.292 [2024-07-26 11:17:36.693080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.292 qpair failed and we were unable to recover it. 00:29:17.292 [2024-07-26 11:17:36.702935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.292 [2024-07-26 11:17:36.703090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.292 [2024-07-26 11:17:36.703108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.292 [2024-07-26 11:17:36.703115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.292 [2024-07-26 11:17:36.703121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3448000b90 00:29:17.292 [2024-07-26 11:17:36.703138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.292 qpair failed and we were unable to recover it. 
00:29:17.292 [2024-07-26 11:17:36.713014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.292 [2024-07-26 11:17:36.713220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.292 [2024-07-26 11:17:36.713249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.292 [2024-07-26 11:17:36.713263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.292 [2024-07-26 11:17:36.713273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3438000b90 00:29:17.292 [2024-07-26 11:17:36.713299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:17.292 qpair failed and we were unable to recover it. 00:29:17.292 [2024-07-26 11:17:36.722935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.292 [2024-07-26 11:17:36.723099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.292 [2024-07-26 11:17:36.723119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.292 [2024-07-26 11:17:36.723126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.292 [2024-07-26 11:17:36.723134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3438000b90 00:29:17.292 [2024-07-26 11:17:36.723153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:17.292 qpair failed and we were unable to recover it. 00:29:17.292 [2024-07-26 11:17:36.732965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.292 [2024-07-26 11:17:36.733122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.292 [2024-07-26 11:17:36.733141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.292 [2024-07-26 11:17:36.733148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.292 [2024-07-26 11:17:36.733155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3438000b90 00:29:17.292 [2024-07-26 11:17:36.733172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:17.292 qpair failed and we were unable to recover it. 
00:29:17.292 [2024-07-26 11:17:36.733469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d08ff0 is same with the state(5) to be set 00:29:17.292 [2024-07-26 11:17:36.742953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.292 [2024-07-26 11:17:36.743118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.292 [2024-07-26 11:17:36.743143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.292 [2024-07-26 11:17:36.743151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.292 [2024-07-26 11:17:36.743159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3440000b90 00:29:17.292 [2024-07-26 11:17:36.743179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.292 qpair failed and we were unable to recover it. 00:29:17.292 [2024-07-26 11:17:36.753018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.292 [2024-07-26 11:17:36.753177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.293 [2024-07-26 11:17:36.753196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.293 [2024-07-26 11:17:36.753204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.293 [2024-07-26 11:17:36.753211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f3440000b90 00:29:17.293 [2024-07-26 11:17:36.753229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.293 qpair failed and we were unable to recover it. 00:29:17.293 [2024-07-26 11:17:36.763062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.293 [2024-07-26 11:17:36.763267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.293 [2024-07-26 11:17:36.763301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.293 [2024-07-26 11:17:36.763312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.293 [2024-07-26 11:17:36.763322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cfaf30 00:29:17.293 [2024-07-26 11:17:36.763347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.293 qpair failed and we were unable to recover it. 
00:29:17.293 [2024-07-26 11:17:36.773070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.293 [2024-07-26 11:17:36.773225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.293 [2024-07-26 11:17:36.773244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.293 [2024-07-26 11:17:36.773251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.293 [2024-07-26 11:17:36.773258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1cfaf30 00:29:17.293 [2024-07-26 11:17:36.773275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.293 qpair failed and we were unable to recover it. 00:29:17.293 [2024-07-26 11:17:36.773548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d08ff0 (9): Bad file descriptor 00:29:17.293 Initializing NVMe Controllers 00:29:17.293 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:17.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:17.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:17.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:17.293 Initialization complete. Launching workers. 
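For anyone trying to reproduce the connection attempts recorded above outside the test harness, a minimal manual check against the same listener could look like the sketch below. It assumes nvme-cli is available on the initiator host and reuses only the parameters visible in this log (transport TCP, traddr 10.0.0.2, trsvcid 4420, subsystem nqn.2016-06.io.spdk:cnode1); while the disconnect test is actively resetting the controller, the CONNECT may be rejected just as shown above, so this is a sketch for a quiescent target, not a statement of the test's own procedure.

# List the subsystems advertised by the target's discovery service
nvme discover -t tcp -a 10.0.0.2 -s 4420

# Attempt the NVMe-oF Fabrics CONNECT that the log entries above are exercising
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

# Tear the association back down once finished
nvme disconnect -n nqn.2016-06.io.spdk:cnode1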
00:29:17.293 Starting thread on core 1 00:29:17.293 Starting thread on core 2 00:29:17.293 Starting thread on core 3 00:29:17.293 Starting thread on core 0 00:29:17.293 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:17.584 00:29:17.584 real 0m11.253s 00:29:17.584 user 0m20.379s 00:29:17.584 sys 0m4.383s 00:29:17.584 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:17.584 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:17.584 ************************************ 00:29:17.584 END TEST nvmf_target_disconnect_tc2 00:29:17.584 ************************************ 00:29:17.584 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:17.585 rmmod nvme_tcp 00:29:17.585 rmmod nvme_fabrics 00:29:17.585 rmmod nvme_keyring 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1612784 ']' 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1612784 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1612784 ']' 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1612784 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1612784 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1612784' 00:29:17.585 killing process with pid 1612784 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 1612784 00:29:17.585 11:17:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1612784 00:29:17.848 11:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:17.848 11:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:17.848 11:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:17.848 11:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:17.848 11:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:17.848 11:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.848 11:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.848 11:17:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.760 11:17:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:19.760 00:29:19.760 real 0m19.471s 00:29:19.760 user 0m47.486s 00:29:19.760 sys 0m8.916s 00:29:19.760 11:17:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:19.760 11:17:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:19.760 ************************************ 00:29:19.760 END TEST nvmf_target_disconnect 00:29:19.760 ************************************ 00:29:19.760 11:17:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:19.760 00:29:19.760 real 5m45.228s 00:29:19.760 user 10m53.242s 00:29:19.760 sys 1m44.401s 00:29:19.760 11:17:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:19.760 11:17:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.760 ************************************ 00:29:19.760 END TEST nvmf_host 00:29:19.760 ************************************ 00:29:20.020 00:29:20.020 real 21m0.140s 00:29:20.020 user 45m16.605s 00:29:20.020 sys 6m17.506s 00:29:20.021 11:17:39 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.021 11:17:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.021 ************************************ 00:29:20.021 END TEST nvmf_tcp 00:29:20.021 ************************************ 00:29:20.021 11:17:39 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:29:20.021 11:17:39 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:20.021 11:17:39 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:20.021 11:17:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:20.021 11:17:39 -- common/autotest_common.sh@10 -- # set +x 00:29:20.021 ************************************ 00:29:20.021 START TEST spdkcli_nvmf_tcp 00:29:20.021 ************************************ 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:20.021 * Looking for test storage... 
00:29:20.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1614314 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1614314 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1614314 ']' 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:20.021 11:17:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.021 [2024-07-26 11:17:39.494902] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:20.021 [2024-07-26 11:17:39.494953] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1614314 ] 00:29:20.281 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.281 [2024-07-26 11:17:39.550867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:20.282 [2024-07-26 11:17:39.631091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.282 [2024-07-26 11:17:39.631094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.851 11:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:20.851 11:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:29:20.851 11:17:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:20.851 11:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:20.851 11:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.851 11:17:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:20.851 11:17:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:20.851 11:17:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:20.851 11:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:20.851 11:17:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:21.112 11:17:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:21.112 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:21.112 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:21.112 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:21.112 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:21.112 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:21.112 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:21.112 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:21.112 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:21.112 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:21.112 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:21.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:21.112 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:21.112 ' 00:29:23.653 [2024-07-26 11:17:42.730758] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.591 [2024-07-26 11:17:43.906679] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:27.132 [2024-07-26 11:17:46.077419] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:28.514 [2024-07-26 11:17:47.943285] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:29.897 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:29.897 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:29.897 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:29.897 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:29.897 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:29.897 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:29.897 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:29.897 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:29.897 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:29.897 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:29.897 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:29.897 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:29.897 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:30.156 11:17:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:30.156 11:17:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:30.156 11:17:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.156 11:17:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:30.156 11:17:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:30.156 11:17:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.156 11:17:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:30.156 11:17:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:30.417 11:17:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:30.677 11:17:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:30.677 11:17:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:30.677 11:17:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:30.677 11:17:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.677 11:17:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:30.677 11:17:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:30.677 11:17:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.677 11:17:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:30.677 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:30.677 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:30.677 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:30.677 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:30.677 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:30.677 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:30.677 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:30.677 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:30.677 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:30.677 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:30.677 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:30.677 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:30.677 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:30.677 ' 00:29:35.961 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:35.961 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:35.961 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:35.961 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:35.961 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:35.961 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:35.961 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:35.961 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:35.961 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:35.961 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:35.961 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:29:35.961 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:35.961 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:35.961 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:35.961 11:17:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:35.961 11:17:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:35.961 11:17:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:35.961 11:17:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1614314 00:29:35.961 11:17:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1614314 ']' 00:29:35.961 11:17:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1614314 00:29:35.961 11:17:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:29:35.961 11:17:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:35.961 11:17:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1614314 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1614314' 00:29:35.961 killing process with pid 1614314 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1614314 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1614314 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1614314 ']' 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1614314 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1614314 ']' 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1614314 00:29:35.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1614314) - No such process 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1614314 is not found' 00:29:35.961 Process with pid 1614314 is not found 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:35.961 00:29:35.961 real 0m15.876s 00:29:35.961 user 0m32.923s 00:29:35.961 sys 0m0.726s 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:35.961 11:17:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:35.961 ************************************ 00:29:35.961 END TEST spdkcli_nvmf_tcp 00:29:35.961 ************************************ 00:29:35.961 11:17:55 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:35.961 11:17:55 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:35.961 11:17:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:35.961 11:17:55 -- common/autotest_common.sh@10 -- # set +x 00:29:35.961 ************************************ 00:29:35.961 START TEST nvmf_identify_passthru 00:29:35.961 ************************************ 00:29:35.961 11:17:55 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:35.961 * Looking for test storage... 00:29:35.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:35.962 11:17:55 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.962 11:17:55 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.962 11:17:55 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.962 11:17:55 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.962 11:17:55 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.962 11:17:55 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.962 11:17:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.962 11:17:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:35.962 11:17:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:35.962 11:17:55 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:35.962 11:17:55 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.962 11:17:55 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.962 11:17:55 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.962 11:17:55 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.962 11:17:55 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.962 11:17:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.962 11:17:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:35.962 11:17:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.962 11:17:55 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.962 11:17:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:35.962 11:17:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:35.962 11:17:55 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:35.962 11:17:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
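The xtrace entries around this point come from gather_supported_nvmf_pci_devs in nvmf/common.sh: it builds per-family lists of PCI IDs (e810, x722, mlx) and then maps each matching NIC to the net interface sysfs exposes for it, which is what produces the "Found 0000:86:00.x" and "Found net devices under ..." lines that follow. A minimal stand-alone sketch of that kind of sysfs lookup, for illustration only (0x8086:0x159b is the E810 ID seen in this run; this is not the script's exact code):

    # Walk PCI devices, keep Intel E810 NICs (vendor 0x8086, device 0x159b),
    # and resolve each to the kernel net interface listed under <pci>/net/.
    for pci in /sys/bus/pci/devices/*; do
        ven=$(cat "$pci/vendor"); dev=$(cat "$pci/device")
        if [ "$ven" = "0x8086" ] && [ "$dev" = "0x159b" ]; then
            for net in "$pci"/net/*; do
                [ -e "$net" ] && echo "Found $(basename "$pci") ($ven - $dev): $(basename "$net")"
            done
        fi
    done

In this run that resolves 0000:86:00.0 and 0000:86:00.1 to cvl_0_0 and cvl_0_1, which the harness then uses as the target-side and initiator-side interfaces.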
00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:41.241 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:41.241 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:41.241 11:18:00 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:41.241 Found net devices under 0000:86:00.0: cvl_0_0 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:41.241 Found net devices under 0000:86:00.1: cvl_0_1 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:41.241 11:18:00 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.241 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:41.242 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:41.242 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.242 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.242 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.242 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.242 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:41.242 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.242 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.242 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.501 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:41.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:29:41.501 00:29:41.501 --- 10.0.0.2 ping statistics --- 00:29:41.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.501 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:29:41.501 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:41.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:29:41.501 00:29:41.501 --- 10.0.0.1 ping statistics --- 00:29:41.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.501 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:29:41.501 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.501 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:41.501 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:41.501 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.501 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:41.501 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:41.501 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.501 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:41.501 11:18:00 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:41.501 11:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:41.501 11:18:00 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:41.501 11:18:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:41.501 11:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:41.501 11:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:41.501 11:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:41.501 11:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:41.501 11:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:41.501 11:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:41.501 11:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:41.501 11:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:41.501 11:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:41.501 11:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:41.501 11:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:41.501 11:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:29:41.501 11:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:29:41.501 11:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:29:41.501 11:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:29:41.501 11:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:41.501 11:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:29:41.501 11:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:41.501 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.696 
11:18:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:29:45.696 11:18:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:29:45.696 11:18:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:45.696 11:18:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:45.696 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.899 11:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:29:49.899 11:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:49.899 11:18:09 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:49.899 11:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:49.899 11:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:49.899 11:18:09 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:49.899 11:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:49.899 11:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1621721 00:29:49.899 11:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:49.899 11:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:49.899 11:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1621721 00:29:49.899 11:18:09 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1621721 ']' 00:29:49.899 11:18:09 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.899 11:18:09 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:49.899 11:18:09 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.899 11:18:09 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:49.899 11:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:49.899 [2024-07-26 11:18:09.252430] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:49.899 [2024-07-26 11:18:09.252482] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.899 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.899 [2024-07-26 11:18:09.312498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.159 [2024-07-26 11:18:09.399316] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.159 [2024-07-26 11:18:09.399350] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
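The setup traced above (and the RPC exchange that follows) boils down to: move the second NIC port into its own network namespace so the target and initiator ends of the TCP loopback are isolated, verify reachability with ping, then launch nvmf_tgt inside that namespace with --wait-for-rpc and configure it over /var/tmp/spdk.sock. A condensed recap using the addresses and interface names seen in this run; relative paths and the socket-polling loop are stand-ins for the harness helpers (e.g. waitforlisten), and rpc_cmd is assumed to resolve to scripts/rpc.py:

    # Namespace rig: cvl_0_0 becomes the target-side interface (10.0.0.2 inside
    # cvl_0_0_ns_spdk); cvl_0_1 stays in the root namespace as the initiator (10.0.0.1).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

    # Start the target inside the namespace, wait for its RPC socket, enable
    # identify passthru, then finish subsystem init (the JSON below is this exchange).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done
    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    ./scripts/rpc.py framework_start_init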
00:29:50.159 [2024-07-26 11:18:09.399357] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.159 [2024-07-26 11:18:09.399364] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.159 [2024-07-26 11:18:09.399369] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.159 [2024-07-26 11:18:09.399413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.159 [2024-07-26 11:18:09.399519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.159 [2024-07-26 11:18:09.399715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.159 [2024-07-26 11:18:09.399717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.728 11:18:10 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:50.728 11:18:10 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:29:50.728 11:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:50.728 11:18:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.728 11:18:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:50.728 INFO: Log level set to 20 00:29:50.728 INFO: Requests: 00:29:50.728 { 00:29:50.728 "jsonrpc": "2.0", 00:29:50.728 "method": "nvmf_set_config", 00:29:50.728 "id": 1, 00:29:50.728 "params": { 00:29:50.728 "admin_cmd_passthru": { 00:29:50.728 "identify_ctrlr": true 00:29:50.728 } 00:29:50.728 } 00:29:50.728 } 00:29:50.728 00:29:50.728 INFO: response: 00:29:50.728 { 00:29:50.728 "jsonrpc": "2.0", 00:29:50.728 "id": 1, 00:29:50.728 "result": true 00:29:50.728 } 00:29:50.728 00:29:50.728 11:18:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.728 11:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:50.728 11:18:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.728 11:18:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:50.728 INFO: Setting log level to 20 00:29:50.728 INFO: Setting log level to 20 00:29:50.728 INFO: Log level set to 20 00:29:50.728 INFO: Log level set to 20 00:29:50.728 INFO: Requests: 00:29:50.728 { 00:29:50.728 "jsonrpc": "2.0", 00:29:50.728 "method": "framework_start_init", 00:29:50.728 "id": 1 00:29:50.728 } 00:29:50.728 00:29:50.728 INFO: Requests: 00:29:50.728 { 00:29:50.728 "jsonrpc": "2.0", 00:29:50.728 "method": "framework_start_init", 00:29:50.728 "id": 1 00:29:50.728 } 00:29:50.728 00:29:50.728 [2024-07-26 11:18:10.176514] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:50.728 INFO: response: 00:29:50.728 { 00:29:50.728 "jsonrpc": "2.0", 00:29:50.728 "id": 1, 00:29:50.728 "result": true 00:29:50.728 } 00:29:50.728 00:29:50.728 INFO: response: 00:29:50.728 { 00:29:50.728 "jsonrpc": "2.0", 00:29:50.728 "id": 1, 00:29:50.728 "result": true 00:29:50.728 } 00:29:50.728 00:29:50.728 11:18:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.728 11:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:50.728 11:18:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:50.728 11:18:10 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:29:50.728 INFO: Setting log level to 40 00:29:50.728 INFO: Setting log level to 40 00:29:50.728 INFO: Setting log level to 40 00:29:50.728 [2024-07-26 11:18:10.189911] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.728 11:18:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:50.728 11:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:50.728 11:18:10 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:50.728 11:18:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:51.000 11:18:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:29:51.000 11:18:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.000 11:18:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:53.606 Nvme0n1 00:29:53.606 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.606 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:53.606 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.606 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:53.606 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.606 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:53.606 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.606 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:53.606 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.606 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.606 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.606 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:53.606 [2024-07-26 11:18:13.087976] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.606 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.606 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:53.606 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.606 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:53.606 [ 00:29:53.606 { 00:29:53.606 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:53.606 "subtype": "Discovery", 00:29:53.606 "listen_addresses": [], 00:29:53.606 "allow_any_host": true, 00:29:53.606 "hosts": [] 00:29:53.606 }, 00:29:53.606 { 00:29:53.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.606 "subtype": "NVMe", 00:29:53.606 "listen_addresses": [ 00:29:53.606 { 00:29:53.867 "trtype": "TCP", 00:29:53.867 "adrfam": "IPv4", 00:29:53.867 "traddr": "10.0.0.2", 00:29:53.867 "trsvcid": "4420" 00:29:53.867 } 00:29:53.867 ], 00:29:53.867 "allow_any_host": true, 00:29:53.867 "hosts": [], 00:29:53.867 "serial_number": 
"SPDK00000000000001", 00:29:53.867 "model_number": "SPDK bdev Controller", 00:29:53.867 "max_namespaces": 1, 00:29:53.867 "min_cntlid": 1, 00:29:53.867 "max_cntlid": 65519, 00:29:53.867 "namespaces": [ 00:29:53.867 { 00:29:53.867 "nsid": 1, 00:29:53.867 "bdev_name": "Nvme0n1", 00:29:53.867 "name": "Nvme0n1", 00:29:53.867 "nguid": "5D4F0661F57B48629E7B212741B48C74", 00:29:53.867 "uuid": "5d4f0661-f57b-4862-9e7b-212741b48c74" 00:29:53.867 } 00:29:53.867 ] 00:29:53.867 } 00:29:53.867 ] 00:29:53.867 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.867 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:53.867 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:53.867 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:53.867 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.867 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:29:53.867 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:53.867 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:53.867 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:53.867 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.128 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:29:54.128 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:29:54.128 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:29:54.128 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:54.128 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:54.128 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:54.128 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:54.128 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:54.128 11:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:54.128 11:18:13 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:54.128 11:18:13 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:54.128 11:18:13 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:54.128 11:18:13 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:54.128 11:18:13 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:54.128 11:18:13 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:54.128 rmmod nvme_tcp 00:29:54.128 rmmod nvme_fabrics 00:29:54.128 rmmod nvme_keyring 00:29:54.128 11:18:13 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:54.128 11:18:13 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:54.128 11:18:13 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:54.128 11:18:13 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1621721 ']' 00:29:54.128 11:18:13 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1621721 00:29:54.128 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1621721 ']' 00:29:54.128 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1621721 00:29:54.128 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:29:54.128 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:54.128 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1621721 00:29:54.128 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:54.128 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:54.128 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1621721' 00:29:54.128 killing process with pid 1621721 00:29:54.128 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1621721 00:29:54.128 11:18:13 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1621721 00:29:56.039 11:18:15 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:56.039 11:18:15 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:56.039 11:18:15 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:56.039 11:18:15 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:56.039 11:18:15 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:56.039 11:18:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.039 11:18:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:56.039 11:18:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.950 11:18:17 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:57.950 00:29:57.950 real 0m21.818s 00:29:57.950 user 0m29.849s 00:29:57.950 sys 0m4.915s 00:29:57.950 11:18:17 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:57.950 11:18:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:57.950 ************************************ 00:29:57.950 END TEST nvmf_identify_passthru 00:29:57.950 ************************************ 00:29:57.950 11:18:17 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:57.950 11:18:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:57.950 11:18:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:57.950 11:18:17 -- common/autotest_common.sh@10 -- # set +x 00:29:57.950 ************************************ 00:29:57.950 START TEST nvmf_dif 00:29:57.950 ************************************ 00:29:57.950 11:18:17 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:57.950 * Looking for test storage... 
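The nvmf_identify_passthru run that ends above reduces to one check: export the local NVMe drive at 0000:5e:00.0 through a single-namespace passthru TCP subsystem and require that identify data read over NVMe/TCP matches what the drive reports over PCIe. A condensed recap with the commands and addresses seen in this run (again assuming rpc_cmd maps to scripts/rpc.py and using relative paths; not the literal test script):

    # Attach the local controller and expose it 1:1 through cnode1
    # (max_namespaces=1; passthru identify was enabled before framework init).
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Identify the same controller over PCIe and over the TCP passthru path, then compare.
    pcie_sn=$(./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
    tcp_sn=$(./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
    [ "$pcie_sn" != "$tcp_sn" ] && echo "identify passthru mismatch: $pcie_sn vs $tcp_sn"

In this run both paths reported serial BTLJ72430F0E1P0FGN and model INTEL, so the comparison passed and the subsystem was deleted during teardown.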
00:29:57.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:57.950 11:18:17 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.950 11:18:17 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.951 11:18:17 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.951 11:18:17 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.951 11:18:17 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.951 11:18:17 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.951 11:18:17 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.951 11:18:17 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.951 11:18:17 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:29:57.951 11:18:17 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.951 11:18:17 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:57.951 11:18:17 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:57.951 11:18:17 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:57.951 11:18:17 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:57.951 11:18:17 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.951 11:18:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:57.951 11:18:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:57.951 11:18:17 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:57.951 11:18:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:03.233 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:03.233 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.233 11:18:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:03.234 Found net devices under 0000:86:00.0: cvl_0_0 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:03.234 Found net devices under 0000:86:00.1: cvl_0_1 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.234 11:18:22 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:03.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:03.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:30:03.234 00:30:03.234 --- 10.0.0.2 ping statistics --- 00:30:03.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.234 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:03.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.420 ms 00:30:03.234 00:30:03.234 --- 10.0.0.1 ping statistics --- 00:30:03.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.234 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:03.234 11:18:22 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:05.778 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:05.778 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:05.778 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:05.778 11:18:25 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.778 11:18:25 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:05.778 11:18:25 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:05.778 11:18:25 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.778 11:18:25 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:05.778 11:18:25 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:05.778 11:18:25 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:05.778 11:18:25 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:05.778 11:18:25 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:05.778 11:18:25 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:05.778 11:18:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:05.778 11:18:25 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1627376 00:30:05.778 11:18:25 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1627376 00:30:05.778 11:18:25 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:05.778 11:18:25 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1627376 ']' 00:30:05.778 11:18:25 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.778 11:18:25 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:05.778 11:18:25 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.778 11:18:25 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:05.778 11:18:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:05.778 [2024-07-26 11:18:25.162287] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:05.778 [2024-07-26 11:18:25.162330] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.778 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.778 [2024-07-26 11:18:25.220364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.037 [2024-07-26 11:18:25.301977] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.037 [2024-07-26 11:18:25.302011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.037 [2024-07-26 11:18:25.302019] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.037 [2024-07-26 11:18:25.302024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.037 [2024-07-26 11:18:25.302029] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
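Summing up the prologue above: the two ice ports are split across namespaces, cvl_0_0 is moved into the private namespace cvl_0_0_ns_spdk with the target address 10.0.0.2 while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened in iptables, and nvmf_tgt is then launched inside that namespace and polled until its RPC socket answers. A rough equivalent of the launch-and-wait step, with the waitforlisten helper approximated by a simple rpc.py poll, would be:

  NS=cvl_0_0_ns_spdk
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the target inside the namespace, with the flags seen in the trace (-i 0 -e 0xFFFF)
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
  # wait until the app answers on its default RPC socket /var/tmp/spdk.sock
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done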
00:30:06.037 [2024-07-26 11:18:25.302054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.607 11:18:25 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:06.607 11:18:25 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:30:06.607 11:18:25 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:06.608 11:18:25 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:06.608 11:18:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:06.608 11:18:25 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.608 11:18:25 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:06.608 11:18:25 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:06.608 11:18:25 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.608 11:18:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:06.608 [2024-07-26 11:18:25.993891] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.608 11:18:25 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.608 11:18:25 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:06.608 11:18:25 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:06.608 11:18:25 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:06.608 11:18:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:06.608 ************************************ 00:30:06.608 START TEST fio_dif_1_default 00:30:06.608 ************************************ 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:06.608 bdev_null0 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:06.608 [2024-07-26 11:18:26.066185] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:06.608 { 00:30:06.608 "params": { 00:30:06.608 "name": "Nvme$subsystem", 00:30:06.608 "trtype": "$TEST_TRANSPORT", 00:30:06.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.608 "adrfam": "ipv4", 00:30:06.608 "trsvcid": "$NVMF_PORT", 00:30:06.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.608 "hdgst": ${hdgst:-false}, 00:30:06.608 "ddgst": ${ddgst:-false} 00:30:06.608 }, 00:30:06.608 "method": "bdev_nvme_attach_controller" 00:30:06.608 } 00:30:06.608 EOF 00:30:06.608 )") 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:06.608 11:18:26 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:06.608 "params": { 00:30:06.608 "name": "Nvme0", 00:30:06.608 "trtype": "tcp", 00:30:06.608 "traddr": "10.0.0.2", 00:30:06.608 "adrfam": "ipv4", 00:30:06.608 "trsvcid": "4420", 00:30:06.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:06.608 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:06.608 "hdgst": false, 00:30:06.608 "ddgst": false 00:30:06.608 }, 00:30:06.608 "method": "bdev_nvme_attach_controller" 00:30:06.608 }' 00:30:06.891 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:06.891 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:06.891 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:06.891 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:06.891 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:06.892 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:06.892 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:06.892 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:06.892 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:06.892 11:18:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:07.150 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:07.150 fio-3.35 00:30:07.150 Starting 1 thread 00:30:07.150 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.348 00:30:19.348 filename0: (groupid=0, jobs=1): err= 0: pid=1627902: Fri Jul 26 11:18:37 2024 00:30:19.348 read: IOPS=177, BW=708KiB/s (725kB/s)(7104KiB/10029msec) 00:30:19.348 slat (nsec): min=5582, max=25176, avg=6154.74, stdev=852.75 00:30:19.348 clat (usec): min=1437, max=45770, avg=22570.46, stdev=20486.54 00:30:19.348 lat (usec): min=1443, max=45795, avg=22576.61, stdev=20486.50 00:30:19.348 clat percentiles (usec): 00:30:19.348 | 1.00th=[ 1909], 5.00th=[ 1909], 10.00th=[ 1926], 20.00th=[ 1942], 00:30:19.348 | 30.00th=[ 1991], 40.00th=[ 2057], 50.00th=[42206], 60.00th=[42730], 00:30:19.348 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43254], 95.00th=[43254], 00:30:19.348 | 99.00th=[44303], 99.50th=[44303], 99.90th=[45876], 99.95th=[45876], 00:30:19.348 | 99.99th=[45876] 00:30:19.348 bw ( KiB/s): min= 672, max= 768, per=99.95%, avg=708.80, stdev=18.79, samples=20 00:30:19.348 iops : min= 168, max= 192, avg=177.20, stdev= 4.70, samples=20 
00:30:19.348 lat (msec) : 2=30.41%, 4=19.37%, 50=50.23% 00:30:19.348 cpu : usr=94.68%, sys=5.07%, ctx=9, majf=0, minf=221 00:30:19.348 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:19.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.348 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.348 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:19.348 00:30:19.348 Run status group 0 (all jobs): 00:30:19.348 READ: bw=708KiB/s (725kB/s), 708KiB/s-708KiB/s (725kB/s-725kB/s), io=7104KiB (7274kB), run=10029-10029msec 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.348 00:30:19.348 real 0m11.176s 00:30:19.348 user 0m15.955s 00:30:19.348 sys 0m0.771s 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:19.348 ************************************ 00:30:19.348 END TEST fio_dif_1_default 00:30:19.348 ************************************ 00:30:19.348 11:18:37 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:19.348 11:18:37 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:19.348 11:18:37 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:19.348 11:18:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:19.348 ************************************ 00:30:19.348 START TEST fio_dif_1_multi_subsystems 00:30:19.348 ************************************ 00:30:19.348 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 
00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:19.349 bdev_null0 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:19.349 [2024-07-26 11:18:37.303694] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:19.349 bdev_null1 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.349 11:18:37 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.349 { 00:30:19.349 "params": { 00:30:19.349 "name": "Nvme$subsystem", 00:30:19.349 "trtype": "$TEST_TRANSPORT", 00:30:19.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.349 "adrfam": "ipv4", 00:30:19.349 "trsvcid": "$NVMF_PORT", 00:30:19.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.349 "hdgst": ${hdgst:-false}, 00:30:19.349 "ddgst": ${ddgst:-false} 00:30:19.349 }, 00:30:19.349 "method": "bdev_nvme_attach_controller" 00:30:19.349 } 00:30:19.349 EOF 00:30:19.349 )") 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
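The rpc_cmd wrappers above issue ordinary SPDK JSON-RPCs against the target started earlier. Written as standalone rpc.py calls (a sketch; rpc.py defaults to the /var/tmp/spdk.sock socket the target listens on), the setup for the second subsystem of the multi-subsystem test is:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK/scripts/rpc.py" "$@"; }
  # done once, before any subsystem is created (target/dif.sh@50): TCP transport with DIF insert/strip
  rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
  # one 64 MiB null bdev with 512-byte blocks, 16-byte metadata, DIF type 1
  rpc bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420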
00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:19.349 { 00:30:19.349 "params": { 00:30:19.349 "name": "Nvme$subsystem", 00:30:19.349 "trtype": "$TEST_TRANSPORT", 00:30:19.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.349 "adrfam": "ipv4", 00:30:19.349 "trsvcid": "$NVMF_PORT", 00:30:19.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.349 "hdgst": ${hdgst:-false}, 00:30:19.349 "ddgst": ${ddgst:-false} 00:30:19.349 }, 00:30:19.349 "method": "bdev_nvme_attach_controller" 00:30:19.349 } 00:30:19.349 EOF 00:30:19.349 )") 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
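The heredoc blocks above render one "params" object per subsystem; the combined result (printed next in the trace) is fed to the fio SPDK plugin through --spdk_json_conf so that fio attaches NVMe/TCP controllers Nvme0 and Nvme1 before running the job. A minimal illustration of that wiring, assuming the usual SPDK naming where controller NvmeX exposes bdev NvmeXn1, that thread=1 is set as the bdev plugin requires, and that the generated JSON has been saved to a stand-in file (the real job file comes from gen_fio_conf and the config arrives on /dev/fd/62, neither of which is reproduced here):

  # hypothetical paths; parameters taken from the fio banner (randread, 4k, iodepth=4)
  cat > /tmp/dif.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=4k
  iodepth=4
  [filename0]
  filename=Nvme0n1
  [filename1]
  filename=Nvme1n1
  EOF
  LD_PRELOAD="$SPDK/build/fio/spdk_bdev" \
      /usr/src/fio/fio /tmp/dif.fio --spdk_json_conf=/tmp/bdev_nvme.json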
00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:19.349 "params": { 00:30:19.349 "name": "Nvme0", 00:30:19.349 "trtype": "tcp", 00:30:19.349 "traddr": "10.0.0.2", 00:30:19.349 "adrfam": "ipv4", 00:30:19.349 "trsvcid": "4420", 00:30:19.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:19.349 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:19.349 "hdgst": false, 00:30:19.349 "ddgst": false 00:30:19.349 }, 00:30:19.349 "method": "bdev_nvme_attach_controller" 00:30:19.349 },{ 00:30:19.349 "params": { 00:30:19.349 "name": "Nvme1", 00:30:19.349 "trtype": "tcp", 00:30:19.349 "traddr": "10.0.0.2", 00:30:19.349 "adrfam": "ipv4", 00:30:19.349 "trsvcid": "4420", 00:30:19.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:19.349 "hdgst": false, 00:30:19.349 "ddgst": false 00:30:19.349 }, 00:30:19.349 "method": "bdev_nvme_attach_controller" 00:30:19.349 }' 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:19.349 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:19.350 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:19.350 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:19.350 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:19.350 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:19.350 11:18:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:19.350 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:19.350 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:19.350 fio-3.35 00:30:19.350 Starting 2 threads 00:30:19.350 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.374 00:30:29.374 filename0: (groupid=0, jobs=1): err= 0: pid=1629866: Fri Jul 26 11:18:48 2024 00:30:29.374 read: IOPS=93, BW=376KiB/s (385kB/s)(3760KiB/10003msec) 00:30:29.374 slat (nsec): min=4205, max=22604, avg=7710.08, stdev=2531.11 00:30:29.374 clat (usec): min=41782, max=44491, avg=42540.90, stdev=536.96 00:30:29.374 lat (usec): min=41788, max=44504, avg=42548.61, stdev=537.01 00:30:29.374 clat percentiles (usec): 00:30:29.374 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:29.374 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42730], 60.00th=[42730], 00:30:29.374 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:30:29.374 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:30:29.374 | 99.99th=[44303] 
00:30:29.374 bw ( KiB/s): min= 352, max= 384, per=34.54%, avg=375.58, stdev=14.48, samples=19 00:30:29.374 iops : min= 88, max= 96, avg=93.89, stdev= 3.62, samples=19 00:30:29.374 lat (msec) : 50=100.00% 00:30:29.374 cpu : usr=97.82%, sys=1.92%, ctx=13, majf=0, minf=132 00:30:29.374 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.374 issued rwts: total=940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.374 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:29.374 filename1: (groupid=0, jobs=1): err= 0: pid=1629867: Fri Jul 26 11:18:48 2024 00:30:29.374 read: IOPS=177, BW=710KiB/s (727kB/s)(7104KiB/10006msec) 00:30:29.374 slat (nsec): min=5960, max=25313, avg=7094.39, stdev=2071.22 00:30:29.374 clat (usec): min=1023, max=44961, avg=22513.22, stdev=20431.23 00:30:29.374 lat (usec): min=1030, max=44986, avg=22520.32, stdev=20430.59 00:30:29.374 clat percentiles (usec): 00:30:29.374 | 1.00th=[ 1909], 5.00th=[ 1926], 10.00th=[ 1942], 20.00th=[ 1958], 00:30:29.374 | 30.00th=[ 2008], 40.00th=[ 2040], 50.00th=[41681], 60.00th=[42730], 00:30:29.374 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:30:29.374 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:30:29.374 | 99.99th=[44827] 00:30:29.374 bw ( KiB/s): min= 672, max= 768, per=65.21%, avg=708.80, stdev=18.79, samples=20 00:30:29.374 iops : min= 168, max= 192, avg=177.20, stdev= 4.70, samples=20 00:30:29.374 lat (msec) : 2=29.05%, 4=20.72%, 50=50.23% 00:30:29.374 cpu : usr=97.76%, sys=1.96%, ctx=16, majf=0, minf=115 00:30:29.374 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.374 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.374 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:29.374 00:30:29.374 Run status group 0 (all jobs): 00:30:29.374 READ: bw=1086KiB/s (1112kB/s), 376KiB/s-710KiB/s (385kB/s-727kB/s), io=10.6MiB (11.1MB), run=10003-10006msec 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.374 11:18:48 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.374 00:30:29.374 real 0m11.335s 00:30:29.374 user 0m26.043s 00:30:29.374 sys 0m0.669s 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:29.374 11:18:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:29.374 ************************************ 00:30:29.374 END TEST fio_dif_1_multi_subsystems 00:30:29.374 ************************************ 00:30:29.374 11:18:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:29.374 11:18:48 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:29.374 11:18:48 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:29.374 11:18:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:29.374 ************************************ 00:30:29.374 START TEST fio_dif_rand_params 00:30:29.374 ************************************ 00:30:29.374 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:30:29.374 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:29.374 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.375 bdev_null0 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:29.375 [2024-07-26 11:18:48.715633] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:29.375 { 00:30:29.375 "params": { 00:30:29.375 "name": "Nvme$subsystem", 00:30:29.375 "trtype": "$TEST_TRANSPORT", 00:30:29.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.375 "adrfam": "ipv4", 00:30:29.375 "trsvcid": "$NVMF_PORT", 00:30:29.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.375 "hdgst": ${hdgst:-false}, 00:30:29.375 "ddgst": ${ddgst:-false} 00:30:29.375 }, 00:30:29.375 "method": "bdev_nvme_attach_controller" 00:30:29.375 } 00:30:29.375 EOF 00:30:29.375 )") 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:29.375 "params": { 00:30:29.375 "name": "Nvme0", 00:30:29.375 "trtype": "tcp", 00:30:29.375 "traddr": "10.0.0.2", 00:30:29.375 "adrfam": "ipv4", 00:30:29.375 "trsvcid": "4420", 00:30:29.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:29.375 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:29.375 "hdgst": false, 00:30:29.375 "ddgst": false 00:30:29.375 }, 00:30:29.375 "method": "bdev_nvme_attach_controller" 00:30:29.375 }' 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:29.375 11:18:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:29.634 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:29.634 ... 00:30:29.634 fio-3.35 00:30:29.634 Starting 3 threads 00:30:29.634 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.203 00:30:36.203 filename0: (groupid=0, jobs=1): err= 0: pid=1631653: Fri Jul 26 11:18:54 2024 00:30:36.203 read: IOPS=191, BW=23.9MiB/s (25.1MB/s)(120MiB/5029msec) 00:30:36.203 slat (nsec): min=6242, max=25262, avg=9635.76, stdev=2594.81 00:30:36.203 clat (usec): min=5799, max=92544, avg=15666.77, stdev=15788.88 00:30:36.203 lat (usec): min=5807, max=92551, avg=15676.41, stdev=15789.01 00:30:36.203 clat percentiles (usec): 00:30:36.203 | 1.00th=[ 6259], 5.00th=[ 6783], 10.00th=[ 7177], 20.00th=[ 7898], 00:30:36.203 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10159], 00:30:36.203 | 70.00th=[10814], 80.00th=[12780], 90.00th=[50594], 95.00th=[52691], 00:30:36.203 | 99.00th=[57934], 99.50th=[88605], 99.90th=[92799], 99.95th=[92799], 00:30:36.203 | 99.99th=[92799] 00:30:36.203 bw ( KiB/s): min=13824, max=33024, per=30.56%, avg=24550.40, stdev=5362.37, samples=10 00:30:36.203 iops : min= 108, max= 258, avg=191.80, stdev=41.89, samples=10 00:30:36.203 lat (msec) : 10=56.96%, 20=28.90%, 50=3.12%, 100=11.02% 00:30:36.203 cpu : usr=95.60%, sys=3.92%, ctx=9, majf=0, minf=104 00:30:36.203 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:36.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.203 issued rwts: total=962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:36.203 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:36.203 filename0: (groupid=0, jobs=1): err= 0: pid=1631654: Fri Jul 26 11:18:54 2024 00:30:36.203 read: IOPS=271, BW=34.0MiB/s (35.6MB/s)(171MiB/5024msec) 00:30:36.203 slat (nsec): min=3632, max=28670, avg=9185.47, stdev=2515.79 00:30:36.203 clat (usec): min=5341, max=67908, avg=11024.44, stdev=10514.07 00:30:36.203 lat (usec): min=5349, max=67921, avg=11033.63, stdev=10514.13 00:30:36.203 clat percentiles (usec): 00:30:36.203 | 1.00th=[ 5800], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 6849], 00:30:36.203 | 30.00th=[ 7308], 40.00th=[ 7701], 50.00th=[ 8160], 60.00th=[ 8586], 00:30:36.203 | 70.00th=[ 9241], 80.00th=[10421], 90.00th=[13173], 95.00th=[48497], 00:30:36.203 | 99.00th=[56886], 99.50th=[59507], 99.90th=[67634], 99.95th=[67634], 00:30:36.203 | 99.99th=[67634] 00:30:36.203 bw ( KiB/s): min=22016, max=47104, per=43.41%, avg=34867.20, stdev=8491.25, samples=10 00:30:36.203 iops : min= 172, max= 368, avg=272.40, stdev=66.34, samples=10 00:30:36.203 lat (msec) : 10=77.00%, 20=17.29%, 50=2.12%, 100=3.59% 00:30:36.203 cpu : usr=94.78%, sys=4.54%, ctx=10, majf=0, minf=105 00:30:36.203 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:36.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.203 issued rwts: total=1365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:36.203 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:36.203 filename0: 
(groupid=0, jobs=1): err= 0: pid=1631655: Fri Jul 26 11:18:54 2024 00:30:36.203 read: IOPS=166, BW=20.8MiB/s (21.9MB/s)(105MiB/5050msec) 00:30:36.203 slat (nsec): min=4522, max=16420, avg=8907.46, stdev=2738.58 00:30:36.203 clat (usec): min=6359, max=64223, avg=17920.85, stdev=16407.46 00:30:36.203 lat (usec): min=6366, max=64236, avg=17929.76, stdev=16407.58 00:30:36.203 clat percentiles (usec): 00:30:36.203 | 1.00th=[ 6718], 5.00th=[ 7177], 10.00th=[ 7767], 20.00th=[ 8356], 00:30:36.203 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[11338], 00:30:36.203 | 70.00th=[13960], 80.00th=[17433], 90.00th=[52167], 95.00th=[54264], 00:30:36.203 | 99.00th=[57934], 99.50th=[58459], 99.90th=[64226], 99.95th=[64226], 00:30:36.203 | 99.99th=[64226] 00:30:36.203 bw ( KiB/s): min=14592, max=28416, per=26.74%, avg=21478.40, stdev=4431.50, samples=10 00:30:36.203 iops : min= 114, max= 222, avg=167.80, stdev=34.62, samples=10 00:30:36.203 lat (msec) : 10=43.59%, 20=39.07%, 50=2.38%, 100=14.96% 00:30:36.203 cpu : usr=95.41%, sys=4.02%, ctx=10, majf=0, minf=64 00:30:36.203 IO depths : 1=5.6%, 2=94.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:36.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:36.203 issued rwts: total=842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:36.203 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:36.203 00:30:36.203 Run status group 0 (all jobs): 00:30:36.203 READ: bw=78.4MiB/s (82.2MB/s), 20.8MiB/s-34.0MiB/s (21.9MB/s-35.6MB/s), io=396MiB (415MB), run=5024-5050msec 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- 
# create_subsystems 0 1 2 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.203 bdev_null0 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.203 [2024-07-26 11:18:54.766441] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:36.203 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.204 bdev_null1 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.204 11:18:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.204 bdev_null2 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:36.204 11:18:54 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:36.204 { 00:30:36.204 "params": { 00:30:36.204 "name": "Nvme$subsystem", 00:30:36.204 "trtype": "$TEST_TRANSPORT", 00:30:36.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.204 "adrfam": "ipv4", 00:30:36.204 "trsvcid": "$NVMF_PORT", 00:30:36.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.204 "hdgst": ${hdgst:-false}, 00:30:36.204 "ddgst": ${ddgst:-false} 00:30:36.204 }, 00:30:36.204 "method": "bdev_nvme_attach_controller" 00:30:36.204 } 00:30:36.204 EOF 00:30:36.204 )") 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:36.204 { 00:30:36.204 "params": { 00:30:36.204 "name": "Nvme$subsystem", 00:30:36.204 "trtype": "$TEST_TRANSPORT", 00:30:36.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.204 "adrfam": "ipv4", 00:30:36.204 "trsvcid": "$NVMF_PORT", 00:30:36.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.204 "hdgst": ${hdgst:-false}, 00:30:36.204 "ddgst": ${ddgst:-false} 00:30:36.204 }, 00:30:36.204 "method": "bdev_nvme_attach_controller" 00:30:36.204 } 00:30:36.204 EOF 00:30:36.204 )") 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:36.204 { 00:30:36.204 "params": { 00:30:36.204 "name": "Nvme$subsystem", 00:30:36.204 "trtype": "$TEST_TRANSPORT", 00:30:36.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:36.204 "adrfam": "ipv4", 00:30:36.204 "trsvcid": "$NVMF_PORT", 00:30:36.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:36.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:36.204 "hdgst": ${hdgst:-false}, 00:30:36.204 "ddgst": ${ddgst:-false} 00:30:36.204 }, 00:30:36.204 "method": "bdev_nvme_attach_controller" 00:30:36.204 } 00:30:36.204 EOF 00:30:36.204 )") 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:36.204 11:18:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:36.204 "params": { 00:30:36.204 "name": "Nvme0", 00:30:36.204 "trtype": "tcp", 00:30:36.204 "traddr": "10.0.0.2", 00:30:36.204 "adrfam": "ipv4", 00:30:36.204 "trsvcid": "4420", 00:30:36.204 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:36.204 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:36.204 "hdgst": false, 00:30:36.204 "ddgst": false 00:30:36.204 }, 00:30:36.204 "method": "bdev_nvme_attach_controller" 00:30:36.204 },{ 00:30:36.204 "params": { 00:30:36.204 "name": "Nvme1", 00:30:36.204 "trtype": "tcp", 00:30:36.204 "traddr": "10.0.0.2", 00:30:36.204 "adrfam": "ipv4", 00:30:36.204 "trsvcid": "4420", 00:30:36.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:36.204 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:36.204 "hdgst": false, 00:30:36.204 "ddgst": false 00:30:36.204 }, 00:30:36.204 "method": "bdev_nvme_attach_controller" 00:30:36.204 },{ 00:30:36.204 "params": { 00:30:36.204 "name": "Nvme2", 00:30:36.204 "trtype": "tcp", 00:30:36.204 "traddr": "10.0.0.2", 00:30:36.204 "adrfam": "ipv4", 00:30:36.204 "trsvcid": "4420", 00:30:36.205 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:36.205 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:36.205 "hdgst": false, 00:30:36.205 "ddgst": false 00:30:36.205 }, 00:30:36.205 "method": "bdev_nvme_attach_controller" 00:30:36.205 }' 00:30:36.205 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:36.205 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:36.205 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:36.205 
11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:36.205 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:36.205 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:36.205 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:36.205 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:36.205 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:36.205 11:18:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:36.205 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:36.205 ... 00:30:36.205 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:36.205 ... 00:30:36.205 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:36.205 ... 00:30:36.205 fio-3.35 00:30:36.205 Starting 24 threads 00:30:36.205 EAL: No free 2048 kB hugepages reported on node 1 00:30:48.413 00:30:48.413 filename0: (groupid=0, jobs=1): err= 0: pid=1632885: Fri Jul 26 11:19:06 2024 00:30:48.413 read: IOPS=638, BW=2555KiB/s (2616kB/s)(25.0MiB/10018msec) 00:30:48.413 slat (usec): min=6, max=384, avg=35.70, stdev=20.44 00:30:48.413 clat (usec): min=5113, max=46377, avg=24764.81, stdev=3344.46 00:30:48.413 lat (usec): min=5129, max=46391, avg=24800.50, stdev=3344.62 00:30:48.413 clat percentiles (usec): 00:30:48.414 | 1.00th=[12256], 5.00th=[20841], 10.00th=[22676], 20.00th=[23462], 00:30:48.414 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24773], 60.00th=[25035], 00:30:48.414 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27132], 95.00th=[28705], 00:30:48.414 | 99.00th=[38011], 99.50th=[40109], 99.90th=[45351], 99.95th=[45351], 00:30:48.414 | 99.99th=[46400] 00:30:48.414 bw ( KiB/s): min= 2432, max= 2800, per=4.38%, avg=2552.80, stdev=97.63, samples=20 00:30:48.414 iops : min= 608, max= 700, avg=638.20, stdev=24.41, samples=20 00:30:48.414 lat (msec) : 10=0.42%, 20=3.92%, 50=95.65% 00:30:48.414 cpu : usr=93.75%, sys=2.47%, ctx=263, majf=0, minf=76 00:30:48.414 IO depths : 1=3.9%, 2=8.7%, 4=21.8%, 8=56.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:30:48.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.414 complete : 0=0.0%, 4=94.1%, 8=0.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.414 issued rwts: total=6398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.414 filename0: (groupid=0, jobs=1): err= 0: pid=1632886: Fri Jul 26 11:19:06 2024 00:30:48.414 read: IOPS=604, BW=2420KiB/s (2478kB/s)(23.7MiB/10015msec) 00:30:48.414 slat (usec): min=6, max=106, avg=32.41, stdev=19.78 00:30:48.414 clat (usec): min=9876, max=51916, avg=26275.00, stdev=4361.66 00:30:48.414 lat (usec): min=9883, max=51948, avg=26307.41, stdev=4361.08 00:30:48.414 clat percentiles (usec): 00:30:48.414 | 1.00th=[16450], 5.00th=[22676], 10.00th=[23462], 20.00th=[23987], 00:30:48.414 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25560], 00:30:48.414 | 
70.00th=[26346], 80.00th=[27657], 90.00th=[31851], 95.00th=[35914], 00:30:48.414 | 99.00th=[43254], 99.50th=[45351], 99.90th=[51119], 99.95th=[51643], 00:30:48.414 | 99.99th=[52167] 00:30:48.414 bw ( KiB/s): min= 2176, max= 2560, per=4.14%, avg=2411.79, stdev=127.65, samples=19 00:30:48.414 iops : min= 544, max= 640, avg=602.95, stdev=31.91, samples=19 00:30:48.414 lat (msec) : 10=0.10%, 20=2.13%, 50=97.64%, 100=0.13% 00:30:48.414 cpu : usr=96.24%, sys=1.87%, ctx=65, majf=0, minf=65 00:30:48.414 IO depths : 1=0.2%, 2=0.5%, 4=6.6%, 8=78.4%, 16=14.3%, 32=0.0%, >=64=0.0% 00:30:48.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.414 complete : 0=0.0%, 4=90.0%, 8=6.2%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.414 issued rwts: total=6058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.414 filename0: (groupid=0, jobs=1): err= 0: pid=1632887: Fri Jul 26 11:19:06 2024 00:30:48.414 read: IOPS=608, BW=2433KiB/s (2491kB/s)(23.8MiB/10014msec) 00:30:48.414 slat (usec): min=6, max=107, avg=32.93, stdev=19.76 00:30:48.414 clat (usec): min=7618, max=46927, avg=26118.42, stdev=4168.39 00:30:48.414 lat (usec): min=7633, max=46935, avg=26151.35, stdev=4167.98 00:30:48.414 clat percentiles (usec): 00:30:48.414 | 1.00th=[15008], 5.00th=[22152], 10.00th=[23200], 20.00th=[23987], 00:30:48.414 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[25560], 00:30:48.414 | 70.00th=[26346], 80.00th=[27395], 90.00th=[31589], 95.00th=[35390], 00:30:48.414 | 99.00th=[40633], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:30:48.414 | 99.99th=[46924] 00:30:48.414 bw ( KiB/s): min= 2296, max= 2560, per=4.16%, avg=2426.53, stdev=83.27, samples=19 00:30:48.414 iops : min= 574, max= 640, avg=606.63, stdev=20.82, samples=19 00:30:48.414 lat (msec) : 10=0.05%, 20=2.84%, 50=97.11% 00:30:48.414 cpu : usr=98.74%, sys=0.87%, ctx=18, majf=0, minf=81 00:30:48.414 IO depths : 1=1.3%, 2=2.6%, 4=10.1%, 8=72.4%, 16=13.6%, 32=0.0%, >=64=0.0% 00:30:48.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.414 complete : 0=0.0%, 4=90.9%, 8=5.7%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.414 issued rwts: total=6090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.414 filename0: (groupid=0, jobs=1): err= 0: pid=1632888: Fri Jul 26 11:19:06 2024 00:30:48.414 read: IOPS=636, BW=2547KiB/s (2608kB/s)(24.9MiB/10003msec) 00:30:48.414 slat (nsec): min=6338, max=96987, avg=36057.08, stdev=16386.58 00:30:48.414 clat (usec): min=4878, max=52658, avg=24821.27, stdev=2624.28 00:30:48.414 lat (usec): min=4884, max=52677, avg=24857.32, stdev=2626.39 00:30:48.414 clat percentiles (usec): 00:30:48.414 | 1.00th=[16581], 5.00th=[22414], 10.00th=[22938], 20.00th=[23725], 00:30:48.414 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:30:48.414 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26608], 95.00th=[27657], 00:30:48.414 | 99.00th=[34341], 99.50th=[38011], 99.90th=[40633], 99.95th=[40633], 00:30:48.414 | 99.99th=[52691] 00:30:48.414 bw ( KiB/s): min= 2304, max= 2688, per=4.35%, avg=2533.26, stdev=94.94, samples=19 00:30:48.414 iops : min= 576, max= 672, avg=633.26, stdev=23.69, samples=19 00:30:48.414 lat (msec) : 10=0.35%, 20=1.82%, 50=97.79%, 100=0.05% 00:30:48.414 cpu : usr=98.91%, sys=0.71%, ctx=17, majf=0, minf=75 00:30:48.414 IO depths : 1=4.4%, 2=8.9%, 4=20.2%, 8=57.6%, 16=8.9%, 32=0.0%, >=64=0.0% 
00:30:48.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.414 complete : 0=0.0%, 4=93.2%, 8=1.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.414 issued rwts: total=6370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.414 filename0: (groupid=0, jobs=1): err= 0: pid=1632889: Fri Jul 26 11:19:06 2024 00:30:48.414 read: IOPS=595, BW=2382KiB/s (2439kB/s)(23.3MiB/10007msec) 00:30:48.414 slat (usec): min=6, max=105, avg=26.27, stdev=19.18 00:30:48.414 clat (usec): min=7049, max=50185, avg=26722.74, stdev=5152.72 00:30:48.414 lat (usec): min=7063, max=50203, avg=26749.00, stdev=5150.61 00:30:48.414 clat percentiles (usec): 00:30:48.414 | 1.00th=[13173], 5.00th=[21890], 10.00th=[23200], 20.00th=[23987], 00:30:48.414 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25297], 60.00th=[26084], 00:30:48.414 | 70.00th=[27132], 80.00th=[28443], 90.00th=[33817], 95.00th=[37487], 00:30:48.414 | 99.00th=[45351], 99.50th=[45876], 99.90th=[50070], 99.95th=[50070], 00:30:48.414 | 99.99th=[50070] 00:30:48.414 bw ( KiB/s): min= 2128, max= 2576, per=4.06%, avg=2368.47, stdev=137.89, samples=19 00:30:48.414 iops : min= 532, max= 644, avg=592.05, stdev=34.48, samples=19 00:30:48.414 lat (msec) : 10=0.27%, 20=3.46%, 50=96.17%, 100=0.10% 00:30:48.414 cpu : usr=97.66%, sys=1.14%, ctx=56, majf=0, minf=80 00:30:48.414 IO depths : 1=0.2%, 2=1.0%, 4=9.9%, 8=74.6%, 16=14.4%, 32=0.0%, >=64=0.0% 00:30:48.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.414 complete : 0=0.0%, 4=91.1%, 8=5.0%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.414 issued rwts: total=5959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.414 filename0: (groupid=0, jobs=1): err= 0: pid=1632890: Fri Jul 26 11:19:06 2024 00:30:48.414 read: IOPS=638, BW=2556KiB/s (2617kB/s)(25.0MiB/10014msec) 00:30:48.414 slat (usec): min=6, max=100, avg=33.72, stdev=19.12 00:30:48.414 clat (usec): min=9113, max=45206, avg=24769.71, stdev=3059.72 00:30:48.414 lat (usec): min=9123, max=45266, avg=24803.43, stdev=3060.49 00:30:48.414 clat percentiles (usec): 00:30:48.414 | 1.00th=[12649], 5.00th=[21365], 10.00th=[22676], 20.00th=[23725], 00:30:48.414 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:30:48.414 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[28181], 00:30:48.414 | 99.00th=[35390], 99.50th=[41681], 99.90th=[44827], 99.95th=[45351], 00:30:48.414 | 99.99th=[45351] 00:30:48.414 bw ( KiB/s): min= 2432, max= 2688, per=4.38%, avg=2553.05, stdev=76.06, samples=20 00:30:48.414 iops : min= 608, max= 672, avg=638.25, stdev=19.01, samples=20 00:30:48.414 lat (msec) : 10=0.25%, 20=3.52%, 50=96.23% 00:30:48.414 cpu : usr=98.72%, sys=0.87%, ctx=16, majf=0, minf=66 00:30:48.414 IO depths : 1=4.5%, 2=9.2%, 4=21.3%, 8=56.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:30:48.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.414 complete : 0=0.0%, 4=93.9%, 8=0.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.414 issued rwts: total=6398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.414 filename0: (groupid=0, jobs=1): err= 0: pid=1632891: Fri Jul 26 11:19:06 2024 00:30:48.414 read: IOPS=626, BW=2505KiB/s (2565kB/s)(24.5MiB/10025msec) 00:30:48.414 slat (usec): min=6, max=1678, avg=27.17, stdev=26.41 00:30:48.414 clat (usec): min=6559, max=49431, 
avg=25360.12, stdev=3749.39 00:30:48.414 lat (usec): min=6576, max=49454, avg=25387.29, stdev=3752.09 00:30:48.414 clat percentiles (usec): 00:30:48.414 | 1.00th=[13698], 5.00th=[20579], 10.00th=[22938], 20.00th=[23725], 00:30:48.414 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25297], 00:30:48.414 | 70.00th=[25822], 80.00th=[26870], 90.00th=[28705], 95.00th=[31327], 00:30:48.414 | 99.00th=[39584], 99.50th=[41157], 99.90th=[47449], 99.95th=[49546], 00:30:48.414 | 99.99th=[49546] 00:30:48.414 bw ( KiB/s): min= 2176, max= 2816, per=4.30%, avg=2506.80, stdev=137.91, samples=20 00:30:48.414 iops : min= 544, max= 704, avg=626.70, stdev=34.48, samples=20 00:30:48.414 lat (msec) : 10=0.46%, 20=4.03%, 50=95.51% 00:30:48.414 cpu : usr=97.94%, sys=1.25%, ctx=34, majf=0, minf=113 00:30:48.414 IO depths : 1=1.8%, 2=3.9%, 4=12.3%, 8=70.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:30:48.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.414 complete : 0=0.0%, 4=90.9%, 8=3.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.414 issued rwts: total=6279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.414 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.414 filename0: (groupid=0, jobs=1): err= 0: pid=1632892: Fri Jul 26 11:19:06 2024 00:30:48.414 read: IOPS=642, BW=2569KiB/s (2631kB/s)(25.1MiB/10010msec) 00:30:48.414 slat (usec): min=6, max=156, avg=35.72, stdev=18.79 00:30:48.414 clat (usec): min=11527, max=45939, avg=24615.91, stdev=2651.69 00:30:48.414 lat (usec): min=11541, max=45957, avg=24651.63, stdev=2654.61 00:30:48.414 clat percentiles (usec): 00:30:48.414 | 1.00th=[14615], 5.00th=[21103], 10.00th=[22414], 20.00th=[23462], 00:30:48.414 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24773], 60.00th=[25035], 00:30:48.414 | 70.00th=[25297], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:30:48.414 | 99.00th=[31589], 99.50th=[34341], 99.90th=[45876], 99.95th=[45876], 00:30:48.414 | 99.99th=[45876] 00:30:48.414 bw ( KiB/s): min= 2304, max= 2992, per=4.40%, avg=2566.74, stdev=153.77, samples=19 00:30:48.414 iops : min= 576, max= 748, avg=641.68, stdev=38.44, samples=19 00:30:48.415 lat (msec) : 20=3.67%, 50=96.33% 00:30:48.415 cpu : usr=98.74%, sys=0.85%, ctx=31, majf=0, minf=56 00:30:48.415 IO depths : 1=4.8%, 2=9.7%, 4=21.0%, 8=56.3%, 16=8.2%, 32=0.0%, >=64=0.0% 00:30:48.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 complete : 0=0.0%, 4=93.3%, 8=1.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 issued rwts: total=6430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.415 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.415 filename1: (groupid=0, jobs=1): err= 0: pid=1632893: Fri Jul 26 11:19:06 2024 00:30:48.415 read: IOPS=625, BW=2501KiB/s (2561kB/s)(24.5MiB/10017msec) 00:30:48.415 slat (usec): min=6, max=102, avg=30.57, stdev=18.77 00:30:48.415 clat (usec): min=5541, max=49478, avg=25410.16, stdev=4298.77 00:30:48.415 lat (usec): min=5553, max=49546, avg=25440.72, stdev=4300.82 00:30:48.415 clat percentiles (usec): 00:30:48.415 | 1.00th=[12256], 5.00th=[19792], 10.00th=[22676], 20.00th=[23725], 00:30:48.415 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25297], 00:30:48.415 | 70.00th=[25822], 80.00th=[26608], 90.00th=[28705], 95.00th=[33424], 00:30:48.415 | 99.00th=[41157], 99.50th=[43779], 99.90th=[49021], 99.95th=[49546], 00:30:48.415 | 99.99th=[49546] 00:30:48.415 bw ( KiB/s): min= 2176, max= 2893, per=4.29%, avg=2502.30, stdev=147.15, samples=20 00:30:48.415 
iops : min= 544, max= 723, avg=625.55, stdev=36.76, samples=20 00:30:48.415 lat (msec) : 10=0.73%, 20=4.41%, 50=94.86% 00:30:48.415 cpu : usr=98.73%, sys=0.82%, ctx=71, majf=0, minf=61 00:30:48.415 IO depths : 1=1.0%, 2=2.3%, 4=10.4%, 8=73.6%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:48.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 complete : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 issued rwts: total=6263,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.415 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.415 filename1: (groupid=0, jobs=1): err= 0: pid=1632894: Fri Jul 26 11:19:06 2024 00:30:48.415 read: IOPS=625, BW=2503KiB/s (2563kB/s)(24.5MiB/10012msec) 00:30:48.415 slat (usec): min=6, max=102, avg=27.20, stdev=17.51 00:30:48.415 clat (usec): min=8206, max=49567, avg=25391.93, stdev=4016.93 00:30:48.415 lat (usec): min=8214, max=49575, avg=25419.13, stdev=4016.94 00:30:48.415 clat percentiles (usec): 00:30:48.415 | 1.00th=[12125], 5.00th=[20055], 10.00th=[22676], 20.00th=[23725], 00:30:48.415 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25035], 60.00th=[25560], 00:30:48.415 | 70.00th=[26084], 80.00th=[26870], 90.00th=[28705], 95.00th=[32637], 00:30:48.415 | 99.00th=[39584], 99.50th=[42206], 99.90th=[46400], 99.95th=[49546], 00:30:48.415 | 99.99th=[49546] 00:30:48.415 bw ( KiB/s): min= 2352, max= 2837, per=4.29%, avg=2503.16, stdev=124.99, samples=19 00:30:48.415 iops : min= 588, max= 709, avg=625.74, stdev=31.22, samples=19 00:30:48.415 lat (msec) : 10=0.40%, 20=4.60%, 50=95.00% 00:30:48.415 cpu : usr=98.74%, sys=0.81%, ctx=39, majf=0, minf=62 00:30:48.415 IO depths : 1=2.0%, 2=4.3%, 4=14.3%, 8=68.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:30:48.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 complete : 0=0.0%, 4=91.9%, 8=3.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 issued rwts: total=6265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.415 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.415 filename1: (groupid=0, jobs=1): err= 0: pid=1632895: Fri Jul 26 11:19:06 2024 00:30:48.415 read: IOPS=599, BW=2398KiB/s (2455kB/s)(23.4MiB/10002msec) 00:30:48.415 slat (usec): min=4, max=105, avg=31.16, stdev=19.48 00:30:48.415 clat (usec): min=6750, max=52373, avg=26523.53, stdev=4861.69 00:30:48.415 lat (usec): min=6793, max=52385, avg=26554.69, stdev=4860.54 00:30:48.415 clat percentiles (usec): 00:30:48.415 | 1.00th=[14877], 5.00th=[21890], 10.00th=[22938], 20.00th=[23987], 00:30:48.415 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[25822], 00:30:48.415 | 70.00th=[26608], 80.00th=[28443], 90.00th=[33424], 95.00th=[36963], 00:30:48.415 | 99.00th=[41681], 99.50th=[45351], 99.90th=[47973], 99.95th=[52167], 00:30:48.415 | 99.99th=[52167] 00:30:48.415 bw ( KiB/s): min= 2128, max= 2560, per=4.09%, avg=2382.58, stdev=108.83, samples=19 00:30:48.415 iops : min= 532, max= 640, avg=595.58, stdev=27.17, samples=19 00:30:48.415 lat (msec) : 10=0.50%, 20=2.84%, 50=96.58%, 100=0.08% 00:30:48.415 cpu : usr=98.13%, sys=1.06%, ctx=348, majf=0, minf=52 00:30:48.415 IO depths : 1=0.6%, 2=1.3%, 4=7.8%, 8=76.1%, 16=14.1%, 32=0.0%, >=64=0.0% 00:30:48.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 complete : 0=0.0%, 4=90.2%, 8=6.2%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 issued rwts: total=5996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.415 latency : target=0, window=0, percentile=100.00%, depth=16 
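Aside (not part of the captured console output): the per-file result blocks in this run all come from one fio invocation through the spdk_bdev ioengine. The banner above shows rw=randread, bs=4k and iodepth=16 for filename0/1/2, and with the harness setting numjobs=8 that yields the 24 threads fio reports. A minimal sketch of a job file with the same shape follows; the section layout and the bdev names Nvme0n1/Nvme1n1/Nvme2n1 are assumptions for illustration, not the file the test script actually generates on the fly.

    # Hypothetical job file approximating the 24-thread run above (sketch only).
    # Options mirror the fio banner (randread, 4 KiB blocks, iodepth 16) and the
    # harness variable numjobs=8; runtime is inferred from the ~10 s run times.
    cat > rand_params_sketch.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=4k
    iodepth=16
    numjobs=8
    runtime=10
    time_based=1

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1

    [filename2]
    filename=Nvme2n1
    EOF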
00:30:48.415 filename1: (groupid=0, jobs=1): err= 0: pid=1632896: Fri Jul 26 11:19:06 2024 00:30:48.415 read: IOPS=583, BW=2332KiB/s (2388kB/s)(22.8MiB/10008msec) 00:30:48.415 slat (usec): min=6, max=118, avg=30.30, stdev=18.20 00:30:48.415 clat (usec): min=8735, max=56498, avg=27244.62, stdev=5349.68 00:30:48.415 lat (usec): min=8742, max=56516, avg=27274.92, stdev=5348.79 00:30:48.415 clat percentiles (usec): 00:30:48.415 | 1.00th=[14353], 5.00th=[20841], 10.00th=[23200], 20.00th=[23987], 00:30:48.415 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25560], 60.00th=[26346], 00:30:48.415 | 70.00th=[28443], 80.00th=[31589], 90.00th=[34866], 95.00th=[38011], 00:30:48.415 | 99.00th=[41681], 99.50th=[43779], 99.90th=[46924], 99.95th=[56361], 00:30:48.415 | 99.99th=[56361] 00:30:48.415 bw ( KiB/s): min= 1920, max= 2688, per=3.98%, avg=2318.00, stdev=208.14, samples=19 00:30:48.415 iops : min= 480, max= 672, avg=579.47, stdev=52.02, samples=19 00:30:48.415 lat (msec) : 10=0.21%, 20=3.48%, 50=96.23%, 100=0.09% 00:30:48.415 cpu : usr=98.58%, sys=1.00%, ctx=22, majf=0, minf=66 00:30:48.415 IO depths : 1=1.4%, 2=2.8%, 4=13.9%, 8=69.8%, 16=12.2%, 32=0.0%, >=64=0.0% 00:30:48.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 complete : 0=0.0%, 4=92.0%, 8=3.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 issued rwts: total=5835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.415 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.415 filename1: (groupid=0, jobs=1): err= 0: pid=1632897: Fri Jul 26 11:19:06 2024 00:30:48.415 read: IOPS=634, BW=2537KiB/s (2598kB/s)(24.9MiB/10054msec) 00:30:48.415 slat (usec): min=6, max=119, avg=37.07, stdev=19.02 00:30:48.415 clat (usec): min=7563, max=57328, avg=24892.20, stdev=3657.72 00:30:48.415 lat (usec): min=7598, max=57396, avg=24929.27, stdev=3659.72 00:30:48.415 clat percentiles (usec): 00:30:48.415 | 1.00th=[12780], 5.00th=[21103], 10.00th=[22676], 20.00th=[23462], 00:30:48.415 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24773], 60.00th=[25035], 00:30:48.415 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27132], 95.00th=[28443], 00:30:48.415 | 99.00th=[39060], 99.50th=[42730], 99.90th=[56886], 99.95th=[57410], 00:30:48.415 | 99.99th=[57410] 00:30:48.415 bw ( KiB/s): min= 2432, max= 2688, per=4.37%, avg=2545.60, stdev=77.85, samples=20 00:30:48.415 iops : min= 608, max= 672, avg=636.40, stdev=19.46, samples=20 00:30:48.415 lat (msec) : 10=0.13%, 20=3.91%, 50=95.78%, 100=0.19% 00:30:48.415 cpu : usr=97.69%, sys=1.21%, ctx=31, majf=0, minf=78 00:30:48.415 IO depths : 1=4.3%, 2=9.3%, 4=21.3%, 8=56.5%, 16=8.6%, 32=0.0%, >=64=0.0% 00:30:48.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 complete : 0=0.0%, 4=93.3%, 8=1.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 issued rwts: total=6376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.415 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.415 filename1: (groupid=0, jobs=1): err= 0: pid=1632898: Fri Jul 26 11:19:06 2024 00:30:48.415 read: IOPS=587, BW=2349KiB/s (2405kB/s)(23.0MiB/10014msec) 00:30:48.415 slat (nsec): min=6222, max=87616, avg=27020.11, stdev=17638.99 00:30:48.415 clat (usec): min=10569, max=51991, avg=27090.39, stdev=5240.58 00:30:48.415 lat (usec): min=10579, max=52010, avg=27117.41, stdev=5238.93 00:30:48.415 clat percentiles (usec): 00:30:48.415 | 1.00th=[15926], 5.00th=[22414], 10.00th=[23200], 20.00th=[23987], 00:30:48.415 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25560], 
60.00th=[26084], 00:30:48.415 | 70.00th=[27395], 80.00th=[29492], 90.00th=[34341], 95.00th=[38536], 00:30:48.415 | 99.00th=[45351], 99.50th=[47973], 99.90th=[51643], 99.95th=[52167], 00:30:48.415 | 99.99th=[52167] 00:30:48.415 bw ( KiB/s): min= 2104, max= 2656, per=4.04%, avg=2353.26, stdev=151.87, samples=19 00:30:48.415 iops : min= 526, max= 664, avg=588.32, stdev=37.97, samples=19 00:30:48.415 lat (msec) : 20=2.24%, 50=97.65%, 100=0.10% 00:30:48.415 cpu : usr=98.73%, sys=0.82%, ctx=21, majf=0, minf=70 00:30:48.415 IO depths : 1=1.0%, 2=2.4%, 4=10.5%, 8=72.9%, 16=13.1%, 32=0.0%, >=64=0.0% 00:30:48.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 complete : 0=0.0%, 4=91.2%, 8=4.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 issued rwts: total=5880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.415 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.415 filename1: (groupid=0, jobs=1): err= 0: pid=1632899: Fri Jul 26 11:19:06 2024 00:30:48.415 read: IOPS=610, BW=2444KiB/s (2502kB/s)(23.9MiB/10011msec) 00:30:48.415 slat (nsec): min=6276, max=86653, avg=26654.64, stdev=17962.12 00:30:48.415 clat (usec): min=10079, max=47840, avg=26064.06, stdev=3999.41 00:30:48.415 lat (usec): min=10096, max=47868, avg=26090.72, stdev=3998.58 00:30:48.415 clat percentiles (usec): 00:30:48.415 | 1.00th=[16188], 5.00th=[22676], 10.00th=[23462], 20.00th=[23987], 00:30:48.415 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25560], 00:30:48.415 | 70.00th=[26084], 80.00th=[27132], 90.00th=[31065], 95.00th=[34341], 00:30:48.415 | 99.00th=[41157], 99.50th=[43254], 99.90th=[46924], 99.95th=[47973], 00:30:48.415 | 99.99th=[47973] 00:30:48.415 bw ( KiB/s): min= 2120, max= 2608, per=4.17%, avg=2433.89, stdev=115.52, samples=19 00:30:48.415 iops : min= 530, max= 652, avg=608.47, stdev=28.88, samples=19 00:30:48.415 lat (msec) : 20=1.95%, 50=98.05% 00:30:48.415 cpu : usr=98.62%, sys=0.87%, ctx=124, majf=0, minf=81 00:30:48.415 IO depths : 1=0.1%, 2=0.2%, 4=5.5%, 8=78.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:30:48.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 complete : 0=0.0%, 4=90.4%, 8=7.3%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.415 issued rwts: total=6116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.415 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.415 filename1: (groupid=0, jobs=1): err= 0: pid=1632900: Fri Jul 26 11:19:06 2024 00:30:48.416 read: IOPS=603, BW=2413KiB/s (2471kB/s)(23.6MiB/10013msec) 00:30:48.416 slat (usec): min=6, max=479, avg=31.49, stdev=21.55 00:30:48.416 clat (usec): min=5521, max=52247, avg=26325.33, stdev=4550.91 00:30:48.416 lat (usec): min=5532, max=52258, avg=26356.83, stdev=4547.24 00:30:48.416 clat percentiles (usec): 00:30:48.416 | 1.00th=[15008], 5.00th=[22414], 10.00th=[23200], 20.00th=[23987], 00:30:48.416 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[25822], 00:30:48.416 | 70.00th=[26346], 80.00th=[27919], 90.00th=[31851], 95.00th=[35390], 00:30:48.416 | 99.00th=[42730], 99.50th=[45351], 99.90th=[50594], 99.95th=[52167], 00:30:48.416 | 99.99th=[52167] 00:30:48.416 bw ( KiB/s): min= 2096, max= 2672, per=4.12%, avg=2404.21, stdev=163.94, samples=19 00:30:48.416 iops : min= 524, max= 668, avg=601.05, stdev=40.98, samples=19 00:30:48.416 lat (msec) : 10=0.28%, 20=2.30%, 50=97.25%, 100=0.17% 00:30:48.416 cpu : usr=89.19%, sys=4.30%, ctx=272, majf=0, minf=51 00:30:48.416 IO depths : 1=0.9%, 2=2.0%, 4=11.9%, 8=72.2%, 16=12.9%, 32=0.0%, 
>=64=0.0% 00:30:48.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.416 complete : 0=0.0%, 4=91.7%, 8=3.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.416 issued rwts: total=6040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.416 filename2: (groupid=0, jobs=1): err= 0: pid=1632901: Fri Jul 26 11:19:06 2024 00:30:48.416 read: IOPS=620, BW=2482KiB/s (2542kB/s)(24.3MiB/10010msec) 00:30:48.416 slat (usec): min=6, max=109, avg=34.48, stdev=19.85 00:30:48.416 clat (usec): min=6694, max=46619, avg=25547.15, stdev=4292.97 00:30:48.416 lat (usec): min=6703, max=46678, avg=25581.63, stdev=4293.60 00:30:48.416 clat percentiles (usec): 00:30:48.416 | 1.00th=[12256], 5.00th=[21103], 10.00th=[22938], 20.00th=[23725], 00:30:48.416 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25035], 60.00th=[25560], 00:30:48.416 | 70.00th=[26084], 80.00th=[26608], 90.00th=[29230], 95.00th=[34341], 00:30:48.416 | 99.00th=[40633], 99.50th=[41681], 99.90th=[44827], 99.95th=[46400], 00:30:48.416 | 99.99th=[46400] 00:30:48.416 bw ( KiB/s): min= 2288, max= 2656, per=4.26%, avg=2486.42, stdev=95.60, samples=19 00:30:48.416 iops : min= 572, max= 664, avg=621.58, stdev=23.88, samples=19 00:30:48.416 lat (msec) : 10=0.47%, 20=4.06%, 50=95.48% 00:30:48.416 cpu : usr=97.73%, sys=1.29%, ctx=109, majf=0, minf=76 00:30:48.416 IO depths : 1=0.8%, 2=2.6%, 4=15.8%, 8=68.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:30:48.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.416 complete : 0=0.0%, 4=92.8%, 8=2.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.416 issued rwts: total=6212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.416 filename2: (groupid=0, jobs=1): err= 0: pid=1632902: Fri Jul 26 11:19:06 2024 00:30:48.416 read: IOPS=649, BW=2598KiB/s (2660kB/s)(25.4MiB/10022msec) 00:30:48.416 slat (usec): min=6, max=135, avg=32.40, stdev=17.47 00:30:48.416 clat (usec): min=8444, max=47642, avg=24378.33, stdev=3468.36 00:30:48.416 lat (usec): min=8453, max=47656, avg=24410.73, stdev=3471.39 00:30:48.416 clat percentiles (usec): 00:30:48.416 | 1.00th=[10552], 5.00th=[17695], 10.00th=[22152], 20.00th=[23462], 00:30:48.416 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[25035], 00:30:48.416 | 70.00th=[25297], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:30:48.416 | 99.00th=[37487], 99.50th=[39060], 99.90th=[45351], 99.95th=[45876], 00:30:48.416 | 99.99th=[47449] 00:30:48.416 bw ( KiB/s): min= 2432, max= 2896, per=4.46%, avg=2598.50, stdev=122.55, samples=20 00:30:48.416 iops : min= 608, max= 724, avg=649.60, stdev=30.65, samples=20 00:30:48.416 lat (msec) : 10=0.54%, 20=5.96%, 50=93.50% 00:30:48.416 cpu : usr=98.52%, sys=1.05%, ctx=73, majf=0, minf=75 00:30:48.416 IO depths : 1=4.6%, 2=9.6%, 4=21.5%, 8=55.7%, 16=8.6%, 32=0.0%, >=64=0.0% 00:30:48.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.416 complete : 0=0.0%, 4=93.9%, 8=0.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.416 issued rwts: total=6509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.416 filename2: (groupid=0, jobs=1): err= 0: pid=1632903: Fri Jul 26 11:19:06 2024 00:30:48.416 read: IOPS=567, BW=2271KiB/s (2325kB/s)(22.2MiB/10003msec) 00:30:48.416 slat (nsec): min=6419, max=93172, avg=23869.72, stdev=19647.07 00:30:48.416 clat (usec): min=5021, 
max=55169, avg=28053.59, stdev=6447.22 00:30:48.416 lat (usec): min=5029, max=55205, avg=28077.46, stdev=6444.64 00:30:48.416 clat percentiles (usec): 00:30:48.416 | 1.00th=[12256], 5.00th=[22414], 10.00th=[23462], 20.00th=[24249], 00:30:48.416 | 30.00th=[24773], 40.00th=[25297], 50.00th=[26084], 60.00th=[27132], 00:30:48.416 | 70.00th=[28443], 80.00th=[32113], 90.00th=[37487], 95.00th=[41681], 00:30:48.416 | 99.00th=[47973], 99.50th=[52691], 99.90th=[54789], 99.95th=[54789], 00:30:48.416 | 99.99th=[55313] 00:30:48.416 bw ( KiB/s): min= 1920, max= 2576, per=3.85%, avg=2246.68, stdev=189.29, samples=19 00:30:48.416 iops : min= 480, max= 644, avg=561.63, stdev=47.35, samples=19 00:30:48.416 lat (msec) : 10=0.51%, 20=2.78%, 50=95.93%, 100=0.77% 00:30:48.416 cpu : usr=97.87%, sys=1.23%, ctx=191, majf=0, minf=83 00:30:48.416 IO depths : 1=0.1%, 2=0.4%, 4=8.2%, 8=76.6%, 16=14.7%, 32=0.0%, >=64=0.0% 00:30:48.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.416 complete : 0=0.0%, 4=90.8%, 8=5.6%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.416 issued rwts: total=5679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.416 filename2: (groupid=0, jobs=1): err= 0: pid=1632904: Fri Jul 26 11:19:06 2024 00:30:48.416 read: IOPS=591, BW=2366KiB/s (2423kB/s)(23.1MiB/10003msec) 00:30:48.416 slat (usec): min=5, max=101, avg=25.33, stdev=18.91 00:30:48.416 clat (usec): min=6392, max=51413, avg=26924.60, stdev=5235.46 00:30:48.416 lat (usec): min=6399, max=51431, avg=26949.93, stdev=5234.01 00:30:48.416 clat percentiles (usec): 00:30:48.416 | 1.00th=[16450], 5.00th=[22414], 10.00th=[23200], 20.00th=[24249], 00:30:48.416 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25297], 60.00th=[25822], 00:30:48.416 | 70.00th=[27132], 80.00th=[28967], 90.00th=[33817], 95.00th=[39060], 00:30:48.416 | 99.00th=[46400], 99.50th=[48497], 99.90th=[51119], 99.95th=[51643], 00:30:48.416 | 99.99th=[51643] 00:30:48.416 bw ( KiB/s): min= 2096, max= 2560, per=4.02%, avg=2345.16, stdev=118.21, samples=19 00:30:48.416 iops : min= 524, max= 640, avg=586.21, stdev=29.54, samples=19 00:30:48.416 lat (msec) : 10=0.34%, 20=2.20%, 50=97.13%, 100=0.34% 00:30:48.416 cpu : usr=98.66%, sys=0.78%, ctx=96, majf=0, minf=58 00:30:48.416 IO depths : 1=0.1%, 2=0.3%, 4=4.9%, 8=79.5%, 16=15.2%, 32=0.0%, >=64=0.0% 00:30:48.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.416 complete : 0=0.0%, 4=89.9%, 8=6.7%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.416 issued rwts: total=5917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.416 filename2: (groupid=0, jobs=1): err= 0: pid=1632905: Fri Jul 26 11:19:06 2024 00:30:48.416 read: IOPS=559, BW=2240KiB/s (2293kB/s)(21.9MiB/10014msec) 00:30:48.416 slat (nsec): min=6295, max=86985, avg=21716.16, stdev=17562.76 00:30:48.416 clat (usec): min=9421, max=53153, avg=28436.73, stdev=6235.25 00:30:48.416 lat (usec): min=9436, max=53172, avg=28458.45, stdev=6233.73 00:30:48.416 clat percentiles (usec): 00:30:48.416 | 1.00th=[17433], 5.00th=[22414], 10.00th=[23200], 20.00th=[24249], 00:30:48.416 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[27395], 00:30:48.416 | 70.00th=[29230], 80.00th=[32637], 90.00th=[37487], 95.00th=[42730], 00:30:48.416 | 99.00th=[47973], 99.50th=[50070], 99.90th=[51643], 99.95th=[53216], 00:30:48.416 | 99.99th=[53216] 00:30:48.416 bw ( KiB/s): min= 1992, max= 2688, per=3.85%, 
avg=2243.79, stdev=170.42, samples=19 00:30:48.416 iops : min= 498, max= 672, avg=560.95, stdev=42.61, samples=19 00:30:48.416 lat (msec) : 10=0.05%, 20=2.18%, 50=97.31%, 100=0.46% 00:30:48.416 cpu : usr=98.51%, sys=0.95%, ctx=208, majf=0, minf=62 00:30:48.416 IO depths : 1=0.6%, 2=1.4%, 4=9.5%, 8=74.7%, 16=13.8%, 32=0.0%, >=64=0.0% 00:30:48.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.416 complete : 0=0.0%, 4=90.9%, 8=4.9%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.416 issued rwts: total=5607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.416 filename2: (groupid=0, jobs=1): err= 0: pid=1632906: Fri Jul 26 11:19:06 2024 00:30:48.416 read: IOPS=523, BW=2092KiB/s (2143kB/s)(20.4MiB/10002msec) 00:30:48.416 slat (nsec): min=6315, max=99032, avg=27828.08, stdev=18695.21 00:30:48.416 clat (usec): min=7196, max=72443, avg=30435.75, stdev=7882.34 00:30:48.416 lat (usec): min=7204, max=72456, avg=30463.58, stdev=7881.81 00:30:48.416 clat percentiles (usec): 00:30:48.416 | 1.00th=[14746], 5.00th=[22938], 10.00th=[23987], 20.00th=[24773], 00:30:48.416 | 30.00th=[25297], 40.00th=[26346], 50.00th=[28181], 60.00th=[30016], 00:30:48.416 | 70.00th=[32375], 80.00th=[36439], 90.00th=[43779], 95.00th=[46400], 00:30:48.416 | 99.00th=[51119], 99.50th=[52691], 99.90th=[69731], 99.95th=[72877], 00:30:48.416 | 99.99th=[72877] 00:30:48.416 bw ( KiB/s): min= 1792, max= 2560, per=3.59%, avg=2092.32, stdev=208.24, samples=19 00:30:48.416 iops : min= 448, max= 640, avg=523.00, stdev=52.06, samples=19 00:30:48.416 lat (msec) : 10=0.44%, 20=2.18%, 50=95.66%, 100=1.72% 00:30:48.416 cpu : usr=97.13%, sys=1.53%, ctx=62, majf=0, minf=84 00:30:48.416 IO depths : 1=0.3%, 2=1.0%, 4=8.5%, 8=76.0%, 16=14.2%, 32=0.0%, >=64=0.0% 00:30:48.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.416 complete : 0=0.0%, 4=90.6%, 8=5.4%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.416 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.416 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.416 filename2: (groupid=0, jobs=1): err= 0: pid=1632907: Fri Jul 26 11:19:06 2024 00:30:48.416 read: IOPS=649, BW=2599KiB/s (2661kB/s)(25.4MiB/10018msec) 00:30:48.416 slat (usec): min=6, max=144, avg=25.33, stdev=16.84 00:30:48.416 clat (usec): min=4331, max=46517, avg=24428.87, stdev=3766.02 00:30:48.416 lat (usec): min=4344, max=46527, avg=24454.20, stdev=3767.25 00:30:48.416 clat percentiles (usec): 00:30:48.416 | 1.00th=[10421], 5.00th=[16909], 10.00th=[22414], 20.00th=[23462], 00:30:48.417 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:30:48.417 | 70.00th=[25560], 80.00th=[26084], 90.00th=[27132], 95.00th=[27919], 00:30:48.417 | 99.00th=[35914], 99.50th=[40633], 99.90th=[45876], 99.95th=[46400], 00:30:48.417 | 99.99th=[46400] 00:30:48.417 bw ( KiB/s): min= 2432, max= 2920, per=4.45%, avg=2596.40, stdev=109.49, samples=20 00:30:48.417 iops : min= 608, max= 730, avg=649.10, stdev=27.37, samples=20 00:30:48.417 lat (msec) : 10=0.68%, 20=6.19%, 50=93.13% 00:30:48.417 cpu : usr=98.50%, sys=1.04%, ctx=61, majf=0, minf=85 00:30:48.417 IO depths : 1=4.9%, 2=9.9%, 4=22.0%, 8=55.2%, 16=7.9%, 32=0.0%, >=64=0.0% 00:30:48.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.417 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.417 issued rwts: total=6509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:30:48.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.417 filename2: (groupid=0, jobs=1): err= 0: pid=1632908: Fri Jul 26 11:19:06 2024 00:30:48.417 read: IOPS=608, BW=2433KiB/s (2492kB/s)(23.8MiB/10015msec) 00:30:48.417 slat (nsec): min=6257, max=89756, avg=22940.92, stdev=15579.01 00:30:48.417 clat (usec): min=8551, max=50960, avg=26162.21, stdev=4219.40 00:30:48.417 lat (usec): min=8560, max=50979, avg=26185.16, stdev=4219.67 00:30:48.417 clat percentiles (usec): 00:30:48.417 | 1.00th=[15401], 5.00th=[22152], 10.00th=[23200], 20.00th=[23987], 00:30:48.417 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[25560], 00:30:48.417 | 70.00th=[26346], 80.00th=[27395], 90.00th=[31065], 95.00th=[34866], 00:30:48.417 | 99.00th=[41681], 99.50th=[45351], 99.90th=[50594], 99.95th=[51119], 00:30:48.417 | 99.99th=[51119] 00:30:48.417 bw ( KiB/s): min= 2152, max= 2688, per=4.17%, avg=2432.00, stdev=128.66, samples=20 00:30:48.417 iops : min= 538, max= 672, avg=608.00, stdev=32.16, samples=20 00:30:48.417 lat (msec) : 10=0.07%, 20=2.54%, 50=97.23%, 100=0.16% 00:30:48.417 cpu : usr=98.19%, sys=1.24%, ctx=64, majf=0, minf=101 00:30:48.417 IO depths : 1=0.7%, 2=1.7%, 4=12.5%, 8=72.2%, 16=12.9%, 32=0.0%, >=64=0.0% 00:30:48.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.417 complete : 0=0.0%, 4=91.8%, 8=3.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.417 issued rwts: total=6092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:48.417 00:30:48.417 Run status group 0 (all jobs): 00:30:48.417 READ: bw=56.9MiB/s (59.7MB/s), 2092KiB/s-2599KiB/s (2143kB/s-2661kB/s), io=572MiB (600MB), run=10002-10054msec 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.417 bdev_null0 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.417 11:19:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.417 [2024-07-26 11:19:06.339336] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.417 bdev_null1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # 
gen_nvmf_target_json 0 1 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:48.417 11:19:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:48.418 { 00:30:48.418 "params": { 00:30:48.418 "name": "Nvme$subsystem", 00:30:48.418 "trtype": "$TEST_TRANSPORT", 00:30:48.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:48.418 "adrfam": "ipv4", 00:30:48.418 "trsvcid": "$NVMF_PORT", 00:30:48.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:48.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:48.418 "hdgst": ${hdgst:-false}, 00:30:48.418 "ddgst": ${ddgst:-false} 00:30:48.418 }, 00:30:48.418 "method": "bdev_nvme_attach_controller" 00:30:48.418 } 00:30:48.418 EOF 00:30:48.418 )") 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:30:48.418 { 00:30:48.418 "params": { 00:30:48.418 "name": "Nvme$subsystem", 00:30:48.418 "trtype": "$TEST_TRANSPORT", 00:30:48.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:48.418 "adrfam": "ipv4", 00:30:48.418 "trsvcid": "$NVMF_PORT", 00:30:48.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:48.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:48.418 "hdgst": ${hdgst:-false}, 00:30:48.418 "ddgst": ${ddgst:-false} 00:30:48.418 }, 00:30:48.418 "method": "bdev_nvme_attach_controller" 00:30:48.418 } 00:30:48.418 EOF 00:30:48.418 )") 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:48.418 "params": { 00:30:48.418 "name": "Nvme0", 00:30:48.418 "trtype": "tcp", 00:30:48.418 "traddr": "10.0.0.2", 00:30:48.418 "adrfam": "ipv4", 00:30:48.418 "trsvcid": "4420", 00:30:48.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:48.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:48.418 "hdgst": false, 00:30:48.418 "ddgst": false 00:30:48.418 }, 00:30:48.418 "method": "bdev_nvme_attach_controller" 00:30:48.418 },{ 00:30:48.418 "params": { 00:30:48.418 "name": "Nvme1", 00:30:48.418 "trtype": "tcp", 00:30:48.418 "traddr": "10.0.0.2", 00:30:48.418 "adrfam": "ipv4", 00:30:48.418 "trsvcid": "4420", 00:30:48.418 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:48.418 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:48.418 "hdgst": false, 00:30:48.418 "ddgst": false 00:30:48.418 }, 00:30:48.418 "method": "bdev_nvme_attach_controller" 00:30:48.418 }' 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:48.418 11:19:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:48.418 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:48.418 ... 00:30:48.418 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:48.418 ... 
00:30:48.418 fio-3.35 00:30:48.418 Starting 4 threads 00:30:48.418 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.688 00:30:53.688 filename0: (groupid=0, jobs=1): err= 0: pid=1634855: Fri Jul 26 11:19:12 2024 00:30:53.688 read: IOPS=665, BW=5325KiB/s (5453kB/s)(26.0MiB/5009msec) 00:30:53.688 slat (nsec): min=4204, max=25436, avg=8446.63, stdev=2752.55 00:30:53.688 clat (usec): min=3989, max=22238, avg=11985.16, stdev=2026.35 00:30:53.688 lat (usec): min=3995, max=22258, avg=11993.61, stdev=2026.13 00:30:53.688 clat percentiles (usec): 00:30:53.688 | 1.00th=[ 7439], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[10945], 00:30:53.688 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:30:53.688 | 70.00th=[11994], 80.00th=[12125], 90.00th=[15270], 95.00th=[16909], 00:30:53.688 | 99.00th=[17957], 99.50th=[18220], 99.90th=[19006], 99.95th=[22152], 00:30:53.688 | 99.99th=[22152] 00:30:53.688 bw ( KiB/s): min= 5120, max= 5504, per=24.83%, avg=5329.60, stdev=150.18, samples=10 00:30:53.688 iops : min= 640, max= 688, avg=666.20, stdev=18.77, samples=10 00:30:53.688 lat (msec) : 4=0.06%, 10=7.47%, 20=92.41%, 50=0.06% 00:30:53.688 cpu : usr=97.38%, sys=2.28%, ctx=7, majf=0, minf=0 00:30:53.688 IO depths : 1=0.3%, 2=4.1%, 4=68.5%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.688 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.688 issued rwts: total=3334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.688 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:53.688 filename0: (groupid=0, jobs=1): err= 0: pid=1634856: Fri Jul 26 11:19:12 2024 00:30:53.688 read: IOPS=683, BW=5468KiB/s (5599kB/s)(26.7MiB/5007msec) 00:30:53.688 slat (nsec): min=4185, max=22388, avg=8643.15, stdev=2800.83 00:30:53.688 clat (usec): min=6380, max=20049, avg=11677.11, stdev=1335.61 00:30:53.688 lat (usec): min=6387, max=20061, avg=11685.75, stdev=1335.65 00:30:53.688 clat percentiles (usec): 00:30:53.688 | 1.00th=[ 8455], 5.00th=[10028], 10.00th=[10552], 20.00th=[11076], 00:30:53.688 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:30:53.688 | 70.00th=[11863], 80.00th=[11994], 90.00th=[13042], 95.00th=[14353], 00:30:53.688 | 99.00th=[16712], 99.50th=[17695], 99.90th=[20055], 99.95th=[20055], 00:30:53.688 | 99.99th=[20055] 00:30:53.688 bw ( KiB/s): min= 5040, max= 5632, per=25.45%, avg=5462.40, stdev=199.30, samples=10 00:30:53.688 iops : min= 630, max= 704, avg=682.80, stdev=24.91, samples=10 00:30:53.688 lat (msec) : 10=5.14%, 20=94.62%, 50=0.23% 00:30:53.689 cpu : usr=97.56%, sys=2.14%, ctx=8, majf=0, minf=9 00:30:53.689 IO depths : 1=0.6%, 2=3.5%, 4=68.3%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.689 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.689 issued rwts: total=3422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.689 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:53.689 filename1: (groupid=0, jobs=1): err= 0: pid=1634857: Fri Jul 26 11:19:12 2024 00:30:53.689 read: IOPS=668, BW=5350KiB/s (5478kB/s)(26.1MiB/5005msec) 00:30:53.689 slat (nsec): min=6333, max=28154, avg=15136.34, stdev=3377.82 00:30:53.689 clat (usec): min=7143, max=19701, avg=11916.27, stdev=1868.99 00:30:53.689 lat (usec): min=7157, max=19720, avg=11931.41, stdev=1868.77 00:30:53.689 clat percentiles (usec): 00:30:53.689 | 1.00th=[ 8094], 
5.00th=[ 9765], 10.00th=[10683], 20.00th=[10945], 00:30:53.689 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:30:53.689 | 70.00th=[11863], 80.00th=[12125], 90.00th=[14222], 95.00th=[16712], 00:30:53.689 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19530], 99.95th=[19792], 00:30:53.689 | 99.99th=[19792] 00:30:53.689 bw ( KiB/s): min= 5104, max= 5632, per=24.74%, avg=5310.22, stdev=189.73, samples=9 00:30:53.689 iops : min= 638, max= 704, avg=663.78, stdev=23.72, samples=9 00:30:53.689 lat (msec) : 10=6.10%, 20=93.90% 00:30:53.689 cpu : usr=97.18%, sys=2.46%, ctx=7, majf=0, minf=0 00:30:53.689 IO depths : 1=0.1%, 2=1.6%, 4=71.0%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.689 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.689 issued rwts: total=3347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.689 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:53.689 filename1: (groupid=0, jobs=1): err= 0: pid=1634858: Fri Jul 26 11:19:12 2024 00:30:53.689 read: IOPS=666, BW=5335KiB/s (5463kB/s)(26.1MiB/5004msec) 00:30:53.689 slat (nsec): min=6053, max=27086, avg=8317.22, stdev=2719.89 00:30:53.689 clat (usec): min=6743, max=21142, avg=11967.55, stdev=1670.62 00:30:53.689 lat (usec): min=6754, max=21148, avg=11975.87, stdev=1670.45 00:30:53.689 clat percentiles (usec): 00:30:53.689 | 1.00th=[ 8848], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:30:53.689 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:30:53.689 | 70.00th=[11994], 80.00th=[12125], 90.00th=[14091], 95.00th=[16450], 00:30:53.689 | 99.00th=[17695], 99.50th=[18744], 99.90th=[19530], 99.95th=[21103], 00:30:53.689 | 99.99th=[21103] 00:30:53.689 bw ( KiB/s): min= 5168, max= 5552, per=24.84%, avg=5332.20, stdev=148.21, samples=10 00:30:53.689 iops : min= 646, max= 694, avg=666.50, stdev=18.54, samples=10 00:30:53.689 lat (msec) : 10=3.66%, 20=96.28%, 50=0.06% 00:30:53.689 cpu : usr=97.76%, sys=1.94%, ctx=7, majf=0, minf=9 00:30:53.689 IO depths : 1=0.7%, 2=4.7%, 4=67.8%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.689 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.689 issued rwts: total=3337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.689 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:53.689 00:30:53.689 Run status group 0 (all jobs): 00:30:53.689 READ: bw=21.0MiB/s (22.0MB/s), 5325KiB/s-5468KiB/s (5453kB/s-5599kB/s), io=105MiB (110MB), run=5004-5009msec 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.689 00:30:53.689 real 0m24.262s 00:30:53.689 user 4m49.528s 00:30:53.689 sys 0m5.127s 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:53.689 11:19:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:53.689 ************************************ 00:30:53.689 END TEST fio_dif_rand_params 00:30:53.689 ************************************ 00:30:53.689 11:19:12 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:53.689 11:19:12 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:53.689 11:19:12 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:53.689 11:19:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:53.689 ************************************ 00:30:53.689 START TEST fio_dif_digest 00:30:53.689 ************************************ 00:30:53.689 11:19:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:30:53.689 11:19:12 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:53.689 11:19:12 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:53.689 11:19:12 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:53.689 11:19:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:53.689 11:19:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:53.689 11:19:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:53.689 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:53.689 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:53.690 
11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:53.690 bdev_null0 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:53.690 [2024-07-26 11:19:13.030428] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.690 { 00:30:53.690 "params": { 00:30:53.690 "name": "Nvme$subsystem", 00:30:53.690 "trtype": "$TEST_TRANSPORT", 00:30:53.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.690 "adrfam": "ipv4", 00:30:53.690 "trsvcid": "$NVMF_PORT", 00:30:53.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.690 "hdgst": ${hdgst:-false}, 00:30:53.690 "ddgst": ${ddgst:-false} 00:30:53.690 }, 00:30:53.690 "method": "bdev_nvme_attach_controller" 00:30:53.690 } 00:30:53.690 EOF 00:30:53.690 )") 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:53.690 "params": { 00:30:53.690 "name": "Nvme0", 00:30:53.690 "trtype": "tcp", 00:30:53.690 "traddr": "10.0.0.2", 00:30:53.690 "adrfam": "ipv4", 00:30:53.690 "trsvcid": "4420", 00:30:53.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:53.690 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:53.690 "hdgst": true, 00:30:53.690 "ddgst": true 00:30:53.690 }, 00:30:53.690 "method": "bdev_nvme_attach_controller" 00:30:53.690 }' 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:53.690 11:19:13 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:53.690 11:19:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.949 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:53.949 ... 00:30:53.949 fio-3.35 00:30:53.949 Starting 3 threads 00:30:53.949 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.152 00:31:06.152 filename0: (groupid=0, jobs=1): err= 0: pid=1635961: Fri Jul 26 11:19:23 2024 00:31:06.152 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(263MiB/10045msec) 00:31:06.152 slat (nsec): min=6512, max=45739, avg=11324.33, stdev=2273.92 00:31:06.152 clat (usec): min=5975, max=59315, avg=14268.09, stdev=11591.69 00:31:06.152 lat (usec): min=5983, max=59327, avg=14279.41, stdev=11591.85 00:31:06.152 clat percentiles (usec): 00:31:06.152 | 1.00th=[ 6325], 5.00th=[ 6980], 10.00th=[ 7898], 20.00th=[ 8979], 00:31:06.152 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11338], 60.00th=[11731], 00:31:06.152 | 70.00th=[12387], 80.00th=[13173], 90.00th=[15926], 95.00th=[51643], 00:31:06.152 | 99.00th=[55313], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:31:06.152 | 99.99th=[59507] 00:31:06.152 bw ( KiB/s): min=21760, max=32256, per=32.22%, avg=26944.00, stdev=3152.76, samples=20 00:31:06.152 iops : min= 170, max= 252, avg=210.50, stdev=24.63, samples=20 00:31:06.152 lat (msec) : 10=30.33%, 20=61.46%, 50=0.90%, 100=7.31% 00:31:06.152 cpu : usr=95.53%, sys=4.04%, ctx=15, majf=0, minf=114 00:31:06.152 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:06.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.152 issued rwts: total=2107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.152 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:06.152 filename0: (groupid=0, jobs=1): err= 0: pid=1635962: Fri Jul 26 11:19:23 2024 00:31:06.152 read: IOPS=249, BW=31.2MiB/s (32.7MB/s)(313MiB/10048msec) 00:31:06.152 slat (nsec): min=4293, max=55987, avg=10890.49, stdev=2466.59 00:31:06.152 clat (usec): min=5877, max=55418, avg=11997.13, stdev=9194.92 00:31:06.152 lat (usec): min=5886, max=55427, avg=12008.02, stdev=9195.12 00:31:06.152 clat percentiles (usec): 00:31:06.152 | 1.00th=[ 6259], 5.00th=[ 6587], 10.00th=[ 7177], 20.00th=[ 8094], 00:31:06.152 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[10683], 00:31:06.152 | 70.00th=[11338], 80.00th=[11994], 90.00th=[13304], 95.00th=[19006], 00:31:06.152 | 99.00th=[53216], 99.50th=[53740], 99.90th=[55313], 99.95th=[55313], 00:31:06.152 | 99.99th=[55313] 00:31:06.152 bw ( KiB/s): min=24320, max=39168, per=38.32%, avg=32051.20, stdev=3773.04, samples=20 00:31:06.152 iops : min= 190, max= 306, avg=250.40, stdev=29.48, samples=20 00:31:06.152 lat (msec) : 10=47.21%, 20=47.81%, 50=1.00%, 100=3.99% 00:31:06.152 cpu : usr=95.27%, sys=4.31%, ctx=13, majf=0, minf=141 00:31:06.152 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:06.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.152 issued rwts: total=2506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.152 latency : target=0, 
window=0, percentile=100.00%, depth=3 00:31:06.152 filename0: (groupid=0, jobs=1): err= 0: pid=1635963: Fri Jul 26 11:19:23 2024 00:31:06.152 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(244MiB/10047msec) 00:31:06.152 slat (usec): min=4, max=291, avg=11.38, stdev= 6.67 00:31:06.152 clat (usec): min=6085, max=97878, avg=15404.84, stdev=13459.46 00:31:06.152 lat (usec): min=6093, max=97891, avg=15416.22, stdev=13459.56 00:31:06.152 clat percentiles (usec): 00:31:06.152 | 1.00th=[ 6325], 5.00th=[ 6849], 10.00th=[ 7963], 20.00th=[ 9110], 00:31:06.152 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:31:06.152 | 70.00th=[12518], 80.00th=[13566], 90.00th=[49546], 95.00th=[52691], 00:31:06.152 | 99.00th=[56886], 99.50th=[58459], 99.90th=[95945], 99.95th=[98042], 00:31:06.152 | 99.99th=[98042] 00:31:06.152 bw ( KiB/s): min=19968, max=31488, per=29.85%, avg=24960.00, stdev=3559.37, samples=20 00:31:06.152 iops : min= 156, max= 246, avg=195.00, stdev=27.81, samples=20 00:31:06.152 lat (msec) : 10=28.59%, 20=60.76%, 50=1.18%, 100=9.48% 00:31:06.152 cpu : usr=95.53%, sys=4.09%, ctx=13, majf=0, minf=137 00:31:06.152 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:06.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.152 issued rwts: total=1952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.152 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:06.152 00:31:06.152 Run status group 0 (all jobs): 00:31:06.152 READ: bw=81.7MiB/s (85.6MB/s), 24.3MiB/s-31.2MiB/s (25.5MB/s-32.7MB/s), io=821MiB (860MB), run=10045-10048msec 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:06.152 00:31:06.152 real 0m11.155s 00:31:06.152 user 0m35.737s 00:31:06.152 sys 0m1.516s 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:06.152 11:19:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:06.152 ************************************ 00:31:06.152 END TEST fio_dif_digest 00:31:06.152 ************************************ 00:31:06.152 11:19:24 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:06.152 11:19:24 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:06.152 11:19:24 
nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:06.152 11:19:24 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:06.152 11:19:24 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:06.153 11:19:24 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:06.153 11:19:24 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:06.153 11:19:24 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:06.153 rmmod nvme_tcp 00:31:06.153 rmmod nvme_fabrics 00:31:06.153 rmmod nvme_keyring 00:31:06.153 11:19:24 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:06.153 11:19:24 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:06.153 11:19:24 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:06.153 11:19:24 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1627376 ']' 00:31:06.153 11:19:24 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1627376 00:31:06.153 11:19:24 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1627376 ']' 00:31:06.153 11:19:24 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1627376 00:31:06.153 11:19:24 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:31:06.153 11:19:24 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:06.153 11:19:24 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1627376 00:31:06.153 11:19:24 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:06.153 11:19:24 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:06.153 11:19:24 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1627376' 00:31:06.153 killing process with pid 1627376 00:31:06.153 11:19:24 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1627376 00:31:06.153 11:19:24 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1627376 00:31:06.153 11:19:24 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:06.153 11:19:24 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:07.533 Waiting for block devices as requested 00:31:07.533 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:07.533 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:07.792 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:07.792 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:07.792 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:07.792 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:08.051 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:08.051 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:08.051 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:08.051 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:08.311 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:08.311 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:08.311 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:08.570 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:08.570 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:08.570 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:08.570 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:08.830 11:19:28 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:08.830 11:19:28 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:08.830 11:19:28 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:08.830 11:19:28 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:08.830 11:19:28 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.830 11:19:28 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:08.830 11:19:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.803 11:19:30 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:10.803 00:31:10.803 real 1m12.999s 00:31:10.803 user 7m7.551s 00:31:10.803 sys 0m18.346s 00:31:10.803 11:19:30 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:10.803 11:19:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:10.803 ************************************ 00:31:10.803 END TEST nvmf_dif 00:31:10.803 ************************************ 00:31:10.803 11:19:30 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:10.803 11:19:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:10.803 11:19:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:10.803 11:19:30 -- common/autotest_common.sh@10 -- # set +x 00:31:10.803 ************************************ 00:31:10.803 START TEST nvmf_abort_qd_sizes 00:31:10.803 ************************************ 00:31:10.803 11:19:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:11.063 * Looking for test storage... 00:31:11.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.063 11:19:30 
nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:11.063 11:19:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:16.336 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:16.336 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:16.337 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:16.337 Found net devices under 0000:86:00.0: cvl_0_0 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:31:16.337 Found net devices under 0000:86:00.1: cvl_0_1 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:16.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:31:16.337 00:31:16.337 --- 10.0.0.2 ping statistics --- 00:31:16.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.337 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:16.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:31:16.337 00:31:16.337 --- 10.0.0.1 ping statistics --- 00:31:16.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.337 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:16.337 11:19:35 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:18.873 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:18.873 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:19.812 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1643824 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1643824 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1643824 ']' 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:19.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:19.812 11:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:19.812 [2024-07-26 11:19:39.301669] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:19.812 [2024-07-26 11:19:39.301714] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.071 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.071 [2024-07-26 11:19:39.359311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:20.071 [2024-07-26 11:19:39.440719] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.071 [2024-07-26 11:19:39.440756] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.071 [2024-07-26 11:19:39.440766] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.071 [2024-07-26 11:19:39.440773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.071 [2024-07-26 11:19:39.440779] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.071 [2024-07-26 11:19:39.440819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.071 [2024-07-26 11:19:39.440876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:20.071 [2024-07-26 11:19:39.440958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.071 [2024-07-26 11:19:39.440962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.637 11:19:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:20.637 11:19:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:31:20.637 11:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:20.637 11:19:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:20.637 11:19:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:20.896 11:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.896 11:19:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:20.896 11:19:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:20.896 11:19:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:20.896 11:19:40 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:20.896 11:19:40 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:20.896 11:19:40 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:31:20.896 11:19:40 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:20.896 11:19:40 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:20.896 11:19:40 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:31:20.896 11:19:40 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:20.896 11:19:40 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:20.896 11:19:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:20.896 11:19:40 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:20.896 11:19:40 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:31:20.897 11:19:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:20.897 11:19:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:31:20.897 11:19:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:20.897 11:19:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:20.897 11:19:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:20.897 11:19:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:20.897 ************************************ 00:31:20.897 START TEST spdk_target_abort 00:31:20.897 ************************************ 00:31:20.897 11:19:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:31:20.897 11:19:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:20.897 11:19:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:31:20.897 11:19:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.897 11:19:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:24.182 spdk_targetn1 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:24.182 [2024-07-26 11:19:43.035873] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:24.182 [2024-07-26 11:19:43.068793] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:24.182 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:24.183 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:24.183 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:24.183 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:24.183 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:24.183 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:24.183 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:24.183 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:24.183 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:24.183 11:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:24.183 EAL: No free 2048 kB hugepages 
reported on node 1 00:31:27.465 Initializing NVMe Controllers 00:31:27.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:27.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:27.465 Initialization complete. Launching workers. 00:31:27.465 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 4721, failed: 0 00:31:27.465 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1574, failed to submit 3147 00:31:27.465 success 913, unsuccess 661, failed 0 00:31:27.465 11:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:27.465 11:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:27.465 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.743 Initializing NVMe Controllers 00:31:30.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:30.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:30.743 Initialization complete. Launching workers. 00:31:30.743 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8747, failed: 0 00:31:30.743 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1266, failed to submit 7481 00:31:30.743 success 316, unsuccess 950, failed 0 00:31:30.743 11:19:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:30.743 11:19:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:30.743 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.271 Initializing NVMe Controllers 00:31:33.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:33.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:33.271 Initialization complete. Launching workers. 
00:31:33.271 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33297, failed: 0 00:31:33.271 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2766, failed to submit 30531 00:31:33.271 success 685, unsuccess 2081, failed 0 00:31:33.271 11:19:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:33.271 11:19:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.271 11:19:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:33.271 11:19:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.271 11:19:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:33.271 11:19:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.271 11:19:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:34.645 11:19:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:34.645 11:19:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1643824 00:31:34.645 11:19:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1643824 ']' 00:31:34.645 11:19:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1643824 00:31:34.645 11:19:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:31:34.645 11:19:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:34.645 11:19:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1643824 00:31:34.645 11:19:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:34.645 11:19:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:34.645 11:19:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1643824' 00:31:34.645 killing process with pid 1643824 00:31:34.645 11:19:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1643824 00:31:34.645 11:19:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1643824 00:31:34.905 00:31:34.905 real 0m14.046s 00:31:34.905 user 0m56.124s 00:31:34.905 sys 0m2.133s 00:31:34.905 11:19:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:34.905 11:19:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:34.905 ************************************ 00:31:34.905 END TEST spdk_target_abort 00:31:34.905 ************************************ 00:31:34.905 11:19:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:34.905 11:19:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:34.905 11:19:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:34.905 11:19:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:34.905 ************************************ 00:31:34.905 START TEST kernel_target_abort 00:31:34.905 
************************************ 00:31:34.905 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:31:34.905 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:34.905 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:34.905 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:34.905 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:34.905 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.905 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.905 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:34.905 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.906 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:34.906 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:34.906 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:34.906 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:34.906 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:34.906 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:34.906 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:34.906 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:34.906 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:34.906 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:34.906 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:34.906 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:34.906 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:34.906 11:19:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:37.494 Waiting for block devices as requested 00:31:37.494 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:37.494 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:37.760 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:37.760 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:37.760 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:37.760 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:38.020 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:38.020 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:38.020 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:38.020 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:38.279 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:38.279 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:38.279 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:38.538 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:38.538 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:38.538 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:38.538 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:38.797 No valid GPT data, bailing 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:38.797 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:38.798 11:19:58 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:31:38.798 00:31:38.798 Discovery Log Number of Records 2, Generation counter 2 00:31:38.798 =====Discovery Log Entry 0====== 00:31:38.798 trtype: tcp 00:31:38.798 adrfam: ipv4 00:31:38.798 subtype: current discovery subsystem 00:31:38.798 treq: not specified, sq flow control disable supported 00:31:38.798 portid: 1 00:31:38.798 trsvcid: 4420 00:31:38.798 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:38.798 traddr: 10.0.0.1 00:31:38.798 eflags: none 00:31:38.798 sectype: none 00:31:38.798 =====Discovery Log Entry 1====== 00:31:38.798 trtype: tcp 00:31:38.798 adrfam: ipv4 00:31:38.798 subtype: nvme subsystem 00:31:38.798 treq: not specified, sq flow control disable supported 00:31:38.798 portid: 1 00:31:38.798 trsvcid: 4420 00:31:38.798 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:38.798 traddr: 10.0.0.1 00:31:38.798 eflags: none 00:31:38.798 sectype: none 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:38.798 11:19:58 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:38.798 11:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:38.798 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.085 Initializing NVMe Controllers 00:31:42.085 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:42.085 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:42.085 Initialization complete. Launching workers. 00:31:42.085 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 24356, failed: 0 00:31:42.085 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24356, failed to submit 0 00:31:42.085 success 0, unsuccess 24356, failed 0 00:31:42.085 11:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:42.085 11:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:42.085 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.374 Initializing NVMe Controllers 00:31:45.374 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:45.374 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:45.374 Initialization complete. Launching workers. 
00:31:45.374 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 53338, failed: 0 00:31:45.374 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13430, failed to submit 39908 00:31:45.374 success 0, unsuccess 13430, failed 0 00:31:45.374 11:20:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:45.374 11:20:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:45.374 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.944 Initializing NVMe Controllers 00:31:47.944 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:47.944 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:47.944 Initialization complete. Launching workers. 00:31:47.944 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 52656, failed: 0 00:31:47.944 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13150, failed to submit 39506 00:31:47.944 success 0, unsuccess 13150, failed 0 00:31:47.944 11:20:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:47.944 11:20:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:47.944 11:20:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:47.944 11:20:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:47.944 11:20:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:47.944 11:20:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:47.944 11:20:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:47.944 11:20:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:47.944 11:20:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:48.204 11:20:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:50.741 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:50.741 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:50.741 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:50.741 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:50.741 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:50.741 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:50.741 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:50.741 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:50.741 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:50.741 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:50.741 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:50.741 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:50.741 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:50.741 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:31:50.741 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:50.741 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:51.679 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:51.679 00:31:51.679 real 0m16.620s 00:31:51.679 user 0m4.004s 00:31:51.679 sys 0m5.176s 00:31:51.679 11:20:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:51.679 11:20:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:51.679 ************************************ 00:31:51.679 END TEST kernel_target_abort 00:31:51.679 ************************************ 00:31:51.679 11:20:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:51.679 11:20:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:51.679 11:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:51.679 11:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:51.679 11:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:51.679 11:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:51.679 11:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:51.679 11:20:10 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:51.679 rmmod nvme_tcp 00:31:51.679 rmmod nvme_fabrics 00:31:51.679 rmmod nvme_keyring 00:31:51.679 11:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:51.679 11:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:51.679 11:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:51.679 11:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1643824 ']' 00:31:51.679 11:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1643824 00:31:51.679 11:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1643824 ']' 00:31:51.679 11:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1643824 00:31:51.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1643824) - No such process 00:31:51.679 11:20:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1643824 is not found' 00:31:51.679 Process with pid 1643824 is not found 00:31:51.679 11:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:51.679 11:20:11 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:54.214 Waiting for block devices as requested 00:31:54.214 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:54.214 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:54.214 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:54.214 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:54.214 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:54.474 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:54.474 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:54.474 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:54.474 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:54.733 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:54.733 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:54.733 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:54.992 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:54.992 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:54.992 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:55.250 0000:80:04.1 
(8086 2021): vfio-pci -> ioatdma 00:31:55.251 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:55.251 11:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:55.251 11:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:55.251 11:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:55.251 11:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:55.251 11:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.251 11:20:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:55.251 11:20:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:57.784 11:20:16 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:57.784 00:31:57.784 real 0m46.442s 00:31:57.784 user 1m4.073s 00:31:57.784 sys 0m15.061s 00:31:57.784 11:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:57.784 11:20:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:57.784 ************************************ 00:31:57.784 END TEST nvmf_abort_qd_sizes 00:31:57.784 ************************************ 00:31:57.784 11:20:16 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:57.784 11:20:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:57.784 11:20:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:57.784 11:20:16 -- common/autotest_common.sh@10 -- # set +x 00:31:57.784 ************************************ 00:31:57.784 START TEST keyring_file 00:31:57.784 ************************************ 00:31:57.784 11:20:16 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:57.784 * Looking for test storage... 
00:31:57.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:57.784 11:20:16 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:57.784 11:20:16 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:57.784 11:20:16 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:57.784 11:20:16 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:57.784 11:20:16 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:57.784 11:20:16 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:57.784 11:20:16 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.784 11:20:16 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.785 11:20:16 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.785 11:20:16 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:57.785 11:20:16 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:57.785 11:20:16 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:57.785 11:20:16 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:57.785 11:20:16 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:57.785 11:20:16 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:57.785 11:20:16 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:57.785 11:20:16 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZLAqsvJniM 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:57.785 11:20:16 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZLAqsvJniM 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZLAqsvJniM 00:31:57.785 11:20:16 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ZLAqsvJniM 00:31:57.785 11:20:16 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dgL8vlo0be 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:57.785 11:20:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dgL8vlo0be 00:31:57.785 11:20:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dgL8vlo0be 00:31:57.785 11:20:16 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.dgL8vlo0be 00:31:57.785 11:20:16 keyring_file -- keyring/file.sh@30 -- # tgtpid=1652457 00:31:57.785 11:20:16 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1652457 00:31:57.785 11:20:16 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:57.785 11:20:16 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1652457 ']' 00:31:57.785 11:20:16 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.785 11:20:16 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:57.785 11:20:16 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.785 11:20:16 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:57.785 11:20:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:57.785 [2024-07-26 11:20:17.033491] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:57.785 [2024-07-26 11:20:17.033542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1652457 ] 00:31:57.785 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.785 [2024-07-26 11:20:17.088115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.785 [2024-07-26 11:20:17.167397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.351 11:20:17 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:58.351 11:20:17 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:58.351 11:20:17 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:58.351 11:20:17 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.351 11:20:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:58.351 [2024-07-26 11:20:17.829145] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:58.610 null0 00:31:58.610 [2024-07-26 11:20:17.861202] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:58.610 [2024-07-26 11:20:17.861485] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:58.610 [2024-07-26 11:20:17.869212] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.610 11:20:17 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:58.610 [2024-07-26 11:20:17.881245] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:58.610 request: 00:31:58.610 { 00:31:58.610 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:58.610 "secure_channel": false, 00:31:58.610 "listen_address": { 00:31:58.610 "trtype": "tcp", 00:31:58.610 "traddr": "127.0.0.1", 00:31:58.610 "trsvcid": "4420" 00:31:58.610 }, 00:31:58.610 "method": "nvmf_subsystem_add_listener", 00:31:58.610 "req_id": 1 00:31:58.610 } 00:31:58.610 Got JSON-RPC error response 00:31:58.610 response: 00:31:58.610 { 00:31:58.610 "code": -32602, 00:31:58.610 "message": "Invalid parameters" 00:31:58.610 } 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 
00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:58.610 11:20:17 keyring_file -- keyring/file.sh@46 -- # bperfpid=1652683 00:31:58.610 11:20:17 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1652683 /var/tmp/bperf.sock 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1652683 ']' 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:58.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:58.610 11:20:17 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:58.610 11:20:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:58.610 [2024-07-26 11:20:17.929575] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:58.610 [2024-07-26 11:20:17.929618] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1652683 ] 00:31:58.610 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.610 [2024-07-26 11:20:17.980626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.610 [2024-07-26 11:20:18.059562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.544 11:20:18 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:59.544 11:20:18 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:59.544 11:20:18 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZLAqsvJniM 00:31:59.544 11:20:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZLAqsvJniM 00:31:59.544 11:20:18 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dgL8vlo0be 00:31:59.544 11:20:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dgL8vlo0be 00:31:59.802 11:20:19 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:59.802 11:20:19 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:59.802 11:20:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:59.802 11:20:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:59.802 11:20:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:59.802 11:20:19 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ZLAqsvJniM == \/\t\m\p\/\t\m\p\.\Z\L\A\q\s\v\J\n\i\M ]] 00:31:59.802 11:20:19 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:31:59.802 11:20:19 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:59.802 11:20:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:59.802 11:20:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:59.802 11:20:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.060 11:20:19 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.dgL8vlo0be == \/\t\m\p\/\t\m\p\.\d\g\L\8\v\l\o\0\b\e ]] 00:32:00.060 11:20:19 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:00.060 11:20:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:00.060 11:20:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.060 11:20:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:00.060 11:20:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.060 11:20:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.318 11:20:19 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:00.318 11:20:19 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:00.318 11:20:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:00.318 11:20:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.318 11:20:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.318 11:20:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:00.318 11:20:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.318 11:20:19 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:00.318 11:20:19 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:00.318 11:20:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:00.578 [2024-07-26 11:20:19.941392] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:00.578 nvme0n1 00:32:00.578 11:20:20 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:00.578 11:20:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:00.578 11:20:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.578 11:20:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.578 11:20:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:00.578 11:20:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.837 11:20:20 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:00.837 11:20:20 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:00.837 11:20:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:00.837 11:20:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.837 11:20:20 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.837 11:20:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:00.837 11:20:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:01.145 11:20:20 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:01.145 11:20:20 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:01.145 Running I/O for 1 seconds... 00:32:02.079 00:32:02.079 Latency(us) 00:32:02.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:02.079 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:02.079 nvme0n1 : 1.03 2855.57 11.15 0.00 0.00 44335.67 11283.59 69753.10 00:32:02.079 =================================================================================================================== 00:32:02.079 Total : 2855.57 11.15 0.00 0.00 44335.67 11283.59 69753.10 00:32:02.079 0 00:32:02.079 11:20:21 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:02.079 11:20:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:02.338 11:20:21 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:02.338 11:20:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:02.338 11:20:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:02.338 11:20:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.338 11:20:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:02.338 11:20:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.597 11:20:21 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:02.597 11:20:21 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:02.597 11:20:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:02.597 11:20:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:02.597 11:20:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.597 11:20:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:02.597 11:20:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:02.855 11:20:22 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:02.855 11:20:22 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:02.855 11:20:22 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:02.855 11:20:22 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:02.855 11:20:22 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:02.855 11:20:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:02.855 11:20:22 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:02.855 11:20:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:02.855 11:20:22 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:02.855 11:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:02.855 [2024-07-26 11:20:22.264639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:02.855 [2024-07-26 11:20:22.265170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f5820 (107): Transport endpoint is not connected 00:32:02.855 [2024-07-26 11:20:22.266163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f5820 (9): Bad file descriptor 00:32:02.855 [2024-07-26 11:20:22.267160] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:02.855 [2024-07-26 11:20:22.267169] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:02.855 [2024-07-26 11:20:22.267176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:02.855 request: 00:32:02.855 { 00:32:02.855 "name": "nvme0", 00:32:02.855 "trtype": "tcp", 00:32:02.855 "traddr": "127.0.0.1", 00:32:02.855 "adrfam": "ipv4", 00:32:02.855 "trsvcid": "4420", 00:32:02.855 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:02.855 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:02.855 "prchk_reftag": false, 00:32:02.855 "prchk_guard": false, 00:32:02.855 "hdgst": false, 00:32:02.855 "ddgst": false, 00:32:02.855 "psk": "key1", 00:32:02.855 "method": "bdev_nvme_attach_controller", 00:32:02.855 "req_id": 1 00:32:02.855 } 00:32:02.855 Got JSON-RPC error response 00:32:02.855 response: 00:32:02.855 { 00:32:02.855 "code": -5, 00:32:02.855 "message": "Input/output error" 00:32:02.855 } 00:32:02.855 11:20:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:02.855 11:20:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:02.855 11:20:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:02.855 11:20:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:02.855 11:20:22 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:02.855 11:20:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:02.855 11:20:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:02.855 11:20:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:02.855 11:20:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:02.855 11:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.112 11:20:22 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:03.112 11:20:22 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:03.112 11:20:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:03.112 11:20:22 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:03.112 11:20:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:03.112 11:20:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:03.112 11:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.369 11:20:22 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:03.370 11:20:22 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:03.370 11:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:03.370 11:20:22 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:03.370 11:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:03.628 11:20:22 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:03.628 11:20:22 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:03.628 11:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:03.886 11:20:23 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:03.886 11:20:23 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ZLAqsvJniM 00:32:03.886 11:20:23 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZLAqsvJniM 00:32:03.886 11:20:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:03.886 11:20:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZLAqsvJniM 00:32:03.886 11:20:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:03.886 11:20:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:03.886 11:20:23 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:03.886 11:20:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:03.886 11:20:23 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZLAqsvJniM 00:32:03.886 11:20:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZLAqsvJniM 00:32:03.886 [2024-07-26 11:20:23.336145] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZLAqsvJniM': 0100660 00:32:03.886 [2024-07-26 11:20:23.336173] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:03.886 request: 00:32:03.886 { 00:32:03.886 "name": "key0", 00:32:03.886 "path": "/tmp/tmp.ZLAqsvJniM", 00:32:03.886 "method": "keyring_file_add_key", 00:32:03.887 "req_id": 1 00:32:03.887 } 00:32:03.887 Got JSON-RPC error response 00:32:03.887 response: 00:32:03.887 { 00:32:03.887 "code": -1, 00:32:03.887 "message": "Operation not permitted" 00:32:03.887 } 00:32:03.887 11:20:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:03.887 11:20:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:03.887 11:20:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:03.887 11:20:23 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:03.887 11:20:23 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ZLAqsvJniM 00:32:03.887 11:20:23 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZLAqsvJniM 00:32:03.887 11:20:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZLAqsvJniM 00:32:04.148 11:20:23 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ZLAqsvJniM 00:32:04.148 11:20:23 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:04.148 11:20:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:04.148 11:20:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:04.148 11:20:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:04.148 11:20:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:04.148 11:20:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:04.407 11:20:23 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:04.407 11:20:23 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.407 11:20:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:04.407 11:20:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.407 11:20:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:04.407 11:20:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.407 11:20:23 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:04.407 11:20:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:04.407 11:20:23 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.407 11:20:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.407 [2024-07-26 11:20:23.865549] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ZLAqsvJniM': No such file or directory 00:32:04.407 [2024-07-26 11:20:23.865573] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:04.407 [2024-07-26 11:20:23.865592] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:04.407 [2024-07-26 11:20:23.865598] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:04.407 [2024-07-26 11:20:23.865604] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:04.407 request: 00:32:04.407 { 00:32:04.407 "name": "nvme0", 00:32:04.407 "trtype": "tcp", 00:32:04.407 "traddr": "127.0.0.1", 00:32:04.407 "adrfam": "ipv4", 00:32:04.407 
"trsvcid": "4420", 00:32:04.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.407 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:04.407 "prchk_reftag": false, 00:32:04.407 "prchk_guard": false, 00:32:04.407 "hdgst": false, 00:32:04.407 "ddgst": false, 00:32:04.407 "psk": "key0", 00:32:04.407 "method": "bdev_nvme_attach_controller", 00:32:04.407 "req_id": 1 00:32:04.407 } 00:32:04.407 Got JSON-RPC error response 00:32:04.407 response: 00:32:04.407 { 00:32:04.407 "code": -19, 00:32:04.407 "message": "No such device" 00:32:04.407 } 00:32:04.407 11:20:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:04.407 11:20:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:04.407 11:20:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:04.407 11:20:23 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:04.407 11:20:23 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:04.407 11:20:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:04.665 11:20:24 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:04.665 11:20:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:04.665 11:20:24 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:04.665 11:20:24 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:04.665 11:20:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:04.665 11:20:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:04.665 11:20:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IwngBfnmEq 00:32:04.665 11:20:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:04.665 11:20:24 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:04.665 11:20:24 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:04.665 11:20:24 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:04.665 11:20:24 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:04.665 11:20:24 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:04.665 11:20:24 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:04.665 11:20:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IwngBfnmEq 00:32:04.665 11:20:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IwngBfnmEq 00:32:04.665 11:20:24 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.IwngBfnmEq 00:32:04.665 11:20:24 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IwngBfnmEq 00:32:04.665 11:20:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IwngBfnmEq 00:32:04.923 11:20:24 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:04.923 11:20:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:05.181 nvme0n1 00:32:05.181 
11:20:24 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:05.181 11:20:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:05.181 11:20:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:05.181 11:20:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.181 11:20:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:05.181 11:20:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.440 11:20:24 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:05.440 11:20:24 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:05.440 11:20:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:05.440 11:20:24 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:05.440 11:20:24 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:05.440 11:20:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:05.440 11:20:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.440 11:20:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.698 11:20:25 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:05.698 11:20:25 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:05.698 11:20:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:05.698 11:20:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:05.698 11:20:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:05.698 11:20:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:05.698 11:20:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:05.957 11:20:25 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:05.957 11:20:25 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:05.957 11:20:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:05.957 11:20:25 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:05.957 11:20:25 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:05.957 11:20:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:06.225 11:20:25 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:06.226 11:20:25 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IwngBfnmEq 00:32:06.226 11:20:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IwngBfnmEq 00:32:06.490 11:20:25 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.dgL8vlo0be 00:32:06.490 11:20:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.dgL8vlo0be 00:32:06.490 11:20:25 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:06.490 11:20:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:06.748 nvme0n1 00:32:06.748 11:20:26 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:06.748 11:20:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:07.007 11:20:26 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:07.007 "subsystems": [ 00:32:07.007 { 00:32:07.007 "subsystem": "keyring", 00:32:07.007 "config": [ 00:32:07.007 { 00:32:07.007 "method": "keyring_file_add_key", 00:32:07.007 "params": { 00:32:07.007 "name": "key0", 00:32:07.007 "path": "/tmp/tmp.IwngBfnmEq" 00:32:07.007 } 00:32:07.007 }, 00:32:07.007 { 00:32:07.007 "method": "keyring_file_add_key", 00:32:07.007 "params": { 00:32:07.007 "name": "key1", 00:32:07.007 "path": "/tmp/tmp.dgL8vlo0be" 00:32:07.007 } 00:32:07.007 } 00:32:07.007 ] 00:32:07.007 }, 00:32:07.007 { 00:32:07.007 "subsystem": "iobuf", 00:32:07.007 "config": [ 00:32:07.007 { 00:32:07.007 "method": "iobuf_set_options", 00:32:07.007 "params": { 00:32:07.007 "small_pool_count": 8192, 00:32:07.007 "large_pool_count": 1024, 00:32:07.007 "small_bufsize": 8192, 00:32:07.007 "large_bufsize": 135168 00:32:07.007 } 00:32:07.007 } 00:32:07.007 ] 00:32:07.007 }, 00:32:07.007 { 00:32:07.007 "subsystem": "sock", 00:32:07.007 "config": [ 00:32:07.007 { 00:32:07.007 "method": "sock_set_default_impl", 00:32:07.007 "params": { 00:32:07.007 "impl_name": "posix" 00:32:07.007 } 00:32:07.007 }, 00:32:07.007 { 00:32:07.007 "method": "sock_impl_set_options", 00:32:07.007 "params": { 00:32:07.007 "impl_name": "ssl", 00:32:07.007 "recv_buf_size": 4096, 00:32:07.007 "send_buf_size": 4096, 00:32:07.007 "enable_recv_pipe": true, 00:32:07.007 "enable_quickack": false, 00:32:07.007 "enable_placement_id": 0, 00:32:07.007 "enable_zerocopy_send_server": true, 00:32:07.007 "enable_zerocopy_send_client": false, 00:32:07.007 "zerocopy_threshold": 0, 00:32:07.007 "tls_version": 0, 00:32:07.007 "enable_ktls": false 00:32:07.007 } 00:32:07.007 }, 00:32:07.007 { 00:32:07.007 "method": "sock_impl_set_options", 00:32:07.007 "params": { 00:32:07.007 "impl_name": "posix", 00:32:07.007 "recv_buf_size": 2097152, 00:32:07.007 "send_buf_size": 2097152, 00:32:07.007 "enable_recv_pipe": true, 00:32:07.007 "enable_quickack": false, 00:32:07.007 "enable_placement_id": 0, 00:32:07.007 "enable_zerocopy_send_server": true, 00:32:07.007 "enable_zerocopy_send_client": false, 00:32:07.007 "zerocopy_threshold": 0, 00:32:07.007 "tls_version": 0, 00:32:07.007 "enable_ktls": false 00:32:07.007 } 00:32:07.007 } 00:32:07.007 ] 00:32:07.007 }, 00:32:07.007 { 00:32:07.007 "subsystem": "vmd", 00:32:07.007 "config": [] 00:32:07.007 }, 00:32:07.007 { 00:32:07.007 "subsystem": "accel", 00:32:07.007 "config": [ 00:32:07.007 { 00:32:07.007 "method": "accel_set_options", 00:32:07.007 "params": { 00:32:07.007 "small_cache_size": 128, 00:32:07.007 "large_cache_size": 16, 00:32:07.007 "task_count": 2048, 00:32:07.007 "sequence_count": 2048, 00:32:07.007 "buf_count": 2048 00:32:07.007 } 00:32:07.007 } 00:32:07.007 ] 00:32:07.007 
}, 00:32:07.007 { 00:32:07.007 "subsystem": "bdev", 00:32:07.007 "config": [ 00:32:07.007 { 00:32:07.007 "method": "bdev_set_options", 00:32:07.007 "params": { 00:32:07.007 "bdev_io_pool_size": 65535, 00:32:07.007 "bdev_io_cache_size": 256, 00:32:07.007 "bdev_auto_examine": true, 00:32:07.007 "iobuf_small_cache_size": 128, 00:32:07.007 "iobuf_large_cache_size": 16 00:32:07.007 } 00:32:07.007 }, 00:32:07.007 { 00:32:07.007 "method": "bdev_raid_set_options", 00:32:07.007 "params": { 00:32:07.007 "process_window_size_kb": 1024, 00:32:07.007 "process_max_bandwidth_mb_sec": 0 00:32:07.007 } 00:32:07.007 }, 00:32:07.007 { 00:32:07.007 "method": "bdev_iscsi_set_options", 00:32:07.007 "params": { 00:32:07.007 "timeout_sec": 30 00:32:07.007 } 00:32:07.008 }, 00:32:07.008 { 00:32:07.008 "method": "bdev_nvme_set_options", 00:32:07.008 "params": { 00:32:07.008 "action_on_timeout": "none", 00:32:07.008 "timeout_us": 0, 00:32:07.008 "timeout_admin_us": 0, 00:32:07.008 "keep_alive_timeout_ms": 10000, 00:32:07.008 "arbitration_burst": 0, 00:32:07.008 "low_priority_weight": 0, 00:32:07.008 "medium_priority_weight": 0, 00:32:07.008 "high_priority_weight": 0, 00:32:07.008 "nvme_adminq_poll_period_us": 10000, 00:32:07.008 "nvme_ioq_poll_period_us": 0, 00:32:07.008 "io_queue_requests": 512, 00:32:07.008 "delay_cmd_submit": true, 00:32:07.008 "transport_retry_count": 4, 00:32:07.008 "bdev_retry_count": 3, 00:32:07.008 "transport_ack_timeout": 0, 00:32:07.008 "ctrlr_loss_timeout_sec": 0, 00:32:07.008 "reconnect_delay_sec": 0, 00:32:07.008 "fast_io_fail_timeout_sec": 0, 00:32:07.008 "disable_auto_failback": false, 00:32:07.008 "generate_uuids": false, 00:32:07.008 "transport_tos": 0, 00:32:07.008 "nvme_error_stat": false, 00:32:07.008 "rdma_srq_size": 0, 00:32:07.008 "io_path_stat": false, 00:32:07.008 "allow_accel_sequence": false, 00:32:07.008 "rdma_max_cq_size": 0, 00:32:07.008 "rdma_cm_event_timeout_ms": 0, 00:32:07.008 "dhchap_digests": [ 00:32:07.008 "sha256", 00:32:07.008 "sha384", 00:32:07.008 "sha512" 00:32:07.008 ], 00:32:07.008 "dhchap_dhgroups": [ 00:32:07.008 "null", 00:32:07.008 "ffdhe2048", 00:32:07.008 "ffdhe3072", 00:32:07.008 "ffdhe4096", 00:32:07.008 "ffdhe6144", 00:32:07.008 "ffdhe8192" 00:32:07.008 ] 00:32:07.008 } 00:32:07.008 }, 00:32:07.008 { 00:32:07.008 "method": "bdev_nvme_attach_controller", 00:32:07.008 "params": { 00:32:07.008 "name": "nvme0", 00:32:07.008 "trtype": "TCP", 00:32:07.008 "adrfam": "IPv4", 00:32:07.008 "traddr": "127.0.0.1", 00:32:07.008 "trsvcid": "4420", 00:32:07.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:07.008 "prchk_reftag": false, 00:32:07.008 "prchk_guard": false, 00:32:07.008 "ctrlr_loss_timeout_sec": 0, 00:32:07.008 "reconnect_delay_sec": 0, 00:32:07.008 "fast_io_fail_timeout_sec": 0, 00:32:07.008 "psk": "key0", 00:32:07.008 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:07.008 "hdgst": false, 00:32:07.008 "ddgst": false 00:32:07.008 } 00:32:07.008 }, 00:32:07.008 { 00:32:07.008 "method": "bdev_nvme_set_hotplug", 00:32:07.008 "params": { 00:32:07.008 "period_us": 100000, 00:32:07.008 "enable": false 00:32:07.008 } 00:32:07.008 }, 00:32:07.008 { 00:32:07.008 "method": "bdev_wait_for_examine" 00:32:07.008 } 00:32:07.008 ] 00:32:07.008 }, 00:32:07.008 { 00:32:07.008 "subsystem": "nbd", 00:32:07.008 "config": [] 00:32:07.008 } 00:32:07.008 ] 00:32:07.008 }' 00:32:07.008 11:20:26 keyring_file -- keyring/file.sh@114 -- # killprocess 1652683 00:32:07.008 11:20:26 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1652683 ']' 00:32:07.008 11:20:26 
keyring_file -- common/autotest_common.sh@954 -- # kill -0 1652683 00:32:07.008 11:20:26 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:07.008 11:20:26 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:07.008 11:20:26 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1652683 00:32:07.008 11:20:26 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:07.008 11:20:26 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:07.008 11:20:26 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1652683' 00:32:07.008 killing process with pid 1652683 00:32:07.008 11:20:26 keyring_file -- common/autotest_common.sh@969 -- # kill 1652683 00:32:07.008 Received shutdown signal, test time was about 1.000000 seconds 00:32:07.008 00:32:07.008 Latency(us) 00:32:07.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.008 =================================================================================================================== 00:32:07.008 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:07.008 11:20:26 keyring_file -- common/autotest_common.sh@974 -- # wait 1652683 00:32:07.269 11:20:26 keyring_file -- keyring/file.sh@117 -- # bperfpid=1654206 00:32:07.269 11:20:26 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1654206 /var/tmp/bperf.sock 00:32:07.269 11:20:26 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1654206 ']' 00:32:07.269 11:20:26 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:07.269 11:20:26 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:07.269 11:20:26 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:07.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
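For readers skimming the trace: the block above captured the first bdevperf instance's live configuration with save_config and killed it; the command that follows restarts bdevperf and replays that JSON through a process substitution, which is why -c /dev/fd/63 shows up in its argument list. A rough shell sketch of that pattern (paths shortened to the spdk tree root; the variable names and the kill line are illustrative stand-ins for the script's killprocess helper, not the exact code):

config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)      # JSON dump, including the keyring subsystem
kill "$bperfpid"; wait "$bperfpid" 2>/dev/null || true            # stop the first bdevperf instance
build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &              # <(...) is what the log shows as /dev/fd/63
bperfpid=$!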
00:32:07.269 11:20:26 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:07.269 11:20:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:07.269 11:20:26 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:07.269 11:20:26 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:07.269 "subsystems": [ 00:32:07.269 { 00:32:07.269 "subsystem": "keyring", 00:32:07.269 "config": [ 00:32:07.269 { 00:32:07.269 "method": "keyring_file_add_key", 00:32:07.269 "params": { 00:32:07.269 "name": "key0", 00:32:07.269 "path": "/tmp/tmp.IwngBfnmEq" 00:32:07.269 } 00:32:07.269 }, 00:32:07.269 { 00:32:07.269 "method": "keyring_file_add_key", 00:32:07.269 "params": { 00:32:07.269 "name": "key1", 00:32:07.269 "path": "/tmp/tmp.dgL8vlo0be" 00:32:07.269 } 00:32:07.269 } 00:32:07.269 ] 00:32:07.269 }, 00:32:07.269 { 00:32:07.269 "subsystem": "iobuf", 00:32:07.269 "config": [ 00:32:07.269 { 00:32:07.269 "method": "iobuf_set_options", 00:32:07.269 "params": { 00:32:07.269 "small_pool_count": 8192, 00:32:07.269 "large_pool_count": 1024, 00:32:07.269 "small_bufsize": 8192, 00:32:07.269 "large_bufsize": 135168 00:32:07.269 } 00:32:07.269 } 00:32:07.269 ] 00:32:07.269 }, 00:32:07.269 { 00:32:07.269 "subsystem": "sock", 00:32:07.269 "config": [ 00:32:07.269 { 00:32:07.269 "method": "sock_set_default_impl", 00:32:07.269 "params": { 00:32:07.269 "impl_name": "posix" 00:32:07.269 } 00:32:07.269 }, 00:32:07.269 { 00:32:07.269 "method": "sock_impl_set_options", 00:32:07.269 "params": { 00:32:07.269 "impl_name": "ssl", 00:32:07.269 "recv_buf_size": 4096, 00:32:07.269 "send_buf_size": 4096, 00:32:07.269 "enable_recv_pipe": true, 00:32:07.269 "enable_quickack": false, 00:32:07.269 "enable_placement_id": 0, 00:32:07.269 "enable_zerocopy_send_server": true, 00:32:07.269 "enable_zerocopy_send_client": false, 00:32:07.269 "zerocopy_threshold": 0, 00:32:07.270 "tls_version": 0, 00:32:07.270 "enable_ktls": false 00:32:07.270 } 00:32:07.270 }, 00:32:07.270 { 00:32:07.270 "method": "sock_impl_set_options", 00:32:07.270 "params": { 00:32:07.270 "impl_name": "posix", 00:32:07.270 "recv_buf_size": 2097152, 00:32:07.270 "send_buf_size": 2097152, 00:32:07.270 "enable_recv_pipe": true, 00:32:07.270 "enable_quickack": false, 00:32:07.270 "enable_placement_id": 0, 00:32:07.270 "enable_zerocopy_send_server": true, 00:32:07.270 "enable_zerocopy_send_client": false, 00:32:07.270 "zerocopy_threshold": 0, 00:32:07.270 "tls_version": 0, 00:32:07.270 "enable_ktls": false 00:32:07.270 } 00:32:07.270 } 00:32:07.270 ] 00:32:07.270 }, 00:32:07.270 { 00:32:07.270 "subsystem": "vmd", 00:32:07.270 "config": [] 00:32:07.270 }, 00:32:07.270 { 00:32:07.270 "subsystem": "accel", 00:32:07.270 "config": [ 00:32:07.270 { 00:32:07.270 "method": "accel_set_options", 00:32:07.270 "params": { 00:32:07.270 "small_cache_size": 128, 00:32:07.270 "large_cache_size": 16, 00:32:07.270 "task_count": 2048, 00:32:07.270 "sequence_count": 2048, 00:32:07.270 "buf_count": 2048 00:32:07.270 } 00:32:07.270 } 00:32:07.270 ] 00:32:07.270 }, 00:32:07.270 { 00:32:07.270 "subsystem": "bdev", 00:32:07.270 "config": [ 00:32:07.270 { 00:32:07.270 "method": "bdev_set_options", 00:32:07.270 "params": { 00:32:07.270 "bdev_io_pool_size": 65535, 00:32:07.270 "bdev_io_cache_size": 256, 00:32:07.270 "bdev_auto_examine": true, 00:32:07.270 "iobuf_small_cache_size": 128, 00:32:07.270 "iobuf_large_cache_size": 16 
00:32:07.270 } 00:32:07.270 }, 00:32:07.270 { 00:32:07.270 "method": "bdev_raid_set_options", 00:32:07.270 "params": { 00:32:07.270 "process_window_size_kb": 1024, 00:32:07.270 "process_max_bandwidth_mb_sec": 0 00:32:07.270 } 00:32:07.270 }, 00:32:07.270 { 00:32:07.270 "method": "bdev_iscsi_set_options", 00:32:07.270 "params": { 00:32:07.270 "timeout_sec": 30 00:32:07.270 } 00:32:07.270 }, 00:32:07.270 { 00:32:07.270 "method": "bdev_nvme_set_options", 00:32:07.270 "params": { 00:32:07.270 "action_on_timeout": "none", 00:32:07.270 "timeout_us": 0, 00:32:07.270 "timeout_admin_us": 0, 00:32:07.270 "keep_alive_timeout_ms": 10000, 00:32:07.270 "arbitration_burst": 0, 00:32:07.270 "low_priority_weight": 0, 00:32:07.270 "medium_priority_weight": 0, 00:32:07.270 "high_priority_weight": 0, 00:32:07.270 "nvme_adminq_poll_period_us": 10000, 00:32:07.270 "nvme_ioq_poll_period_us": 0, 00:32:07.270 "io_queue_requests": 512, 00:32:07.270 "delay_cmd_submit": true, 00:32:07.270 "transport_retry_count": 4, 00:32:07.270 "bdev_retry_count": 3, 00:32:07.270 "transport_ack_timeout": 0, 00:32:07.270 "ctrlr_loss_timeout_sec": 0, 00:32:07.270 "reconnect_delay_sec": 0, 00:32:07.270 "fast_io_fail_timeout_sec": 0, 00:32:07.270 "disable_auto_failback": false, 00:32:07.270 "generate_uuids": false, 00:32:07.270 "transport_tos": 0, 00:32:07.270 "nvme_error_stat": false, 00:32:07.270 "rdma_srq_size": 0, 00:32:07.270 "io_path_stat": false, 00:32:07.270 "allow_accel_sequence": false, 00:32:07.270 "rdma_max_cq_size": 0, 00:32:07.270 "rdma_cm_event_timeout_ms": 0, 00:32:07.270 "dhchap_digests": [ 00:32:07.270 "sha256", 00:32:07.270 "sha384", 00:32:07.270 "sha512" 00:32:07.270 ], 00:32:07.270 "dhchap_dhgroups": [ 00:32:07.270 "null", 00:32:07.270 "ffdhe2048", 00:32:07.270 "ffdhe3072", 00:32:07.270 "ffdhe4096", 00:32:07.270 "ffdhe6144", 00:32:07.270 "ffdhe8192" 00:32:07.270 ] 00:32:07.270 } 00:32:07.270 }, 00:32:07.270 { 00:32:07.270 "method": "bdev_nvme_attach_controller", 00:32:07.270 "params": { 00:32:07.270 "name": "nvme0", 00:32:07.270 "trtype": "TCP", 00:32:07.270 "adrfam": "IPv4", 00:32:07.270 "traddr": "127.0.0.1", 00:32:07.270 "trsvcid": "4420", 00:32:07.270 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:07.270 "prchk_reftag": false, 00:32:07.270 "prchk_guard": false, 00:32:07.270 "ctrlr_loss_timeout_sec": 0, 00:32:07.270 "reconnect_delay_sec": 0, 00:32:07.270 "fast_io_fail_timeout_sec": 0, 00:32:07.270 "psk": "key0", 00:32:07.270 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:07.270 "hdgst": false, 00:32:07.270 "ddgst": false 00:32:07.270 } 00:32:07.270 }, 00:32:07.270 { 00:32:07.270 "method": "bdev_nvme_set_hotplug", 00:32:07.270 "params": { 00:32:07.270 "period_us": 100000, 00:32:07.270 "enable": false 00:32:07.270 } 00:32:07.270 }, 00:32:07.270 { 00:32:07.270 "method": "bdev_wait_for_examine" 00:32:07.270 } 00:32:07.270 ] 00:32:07.270 }, 00:32:07.270 { 00:32:07.270 "subsystem": "nbd", 00:32:07.270 "config": [] 00:32:07.270 } 00:32:07.270 ] 00:32:07.270 }' 00:32:07.270 [2024-07-26 11:20:26.709649] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
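The configuration echoed into the new bdevperf above is the complete saved state, but only a small slice of it drives this test. A trimmed-down sketch of a startup config that would pre-load the two file-based keys and attach the TLS-protected controller with psk key0 could look like the following; the field values are copied from the dump above, but whether bdevperf is satisfied with only these two subsystems (leaving sock, iobuf, accel and the rest at defaults) is an assumption here, since the test replays the full saved config instead:

cat > /tmp/bperf_keyring.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        {"method": "keyring_file_add_key", "params": {"name": "key0", "path": "/tmp/tmp.IwngBfnmEq"}},
        {"method": "keyring_file_add_key", "params": {"name": "key1", "path": "/tmp/tmp.dgL8vlo0be"}}
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        {"method": "bdev_nvme_attach_controller", "params": {
          "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
          "traddr": "127.0.0.1", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode0",
          "hostnqn": "nqn.2016-06.io.spdk:host0",
          "psk": "key0"}},
        {"method": "bdev_wait_for_examine"}
      ]
    }
  ]
}
EOF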
00:32:07.270 [2024-07-26 11:20:26.709696] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654206 ] 00:32:07.270 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.270 [2024-07-26 11:20:26.762637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.531 [2024-07-26 11:20:26.836864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.531 [2024-07-26 11:20:26.995014] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:08.097 11:20:27 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:08.097 11:20:27 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:08.097 11:20:27 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:08.097 11:20:27 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:08.097 11:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:08.355 11:20:27 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:08.355 11:20:27 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:08.355 11:20:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:08.355 11:20:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:08.355 11:20:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:08.355 11:20:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:08.355 11:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:08.355 11:20:27 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:08.355 11:20:27 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:08.613 11:20:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:08.613 11:20:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:08.613 11:20:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:08.613 11:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:08.613 11:20:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:08.613 11:20:28 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:08.613 11:20:28 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:08.613 11:20:28 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:08.613 11:20:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:08.873 11:20:28 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:08.873 11:20:28 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:08.873 11:20:28 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.IwngBfnmEq /tmp/tmp.dgL8vlo0be 00:32:08.873 11:20:28 keyring_file -- keyring/file.sh@20 -- # killprocess 1654206 00:32:08.873 11:20:28 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1654206 ']' 00:32:08.873 11:20:28 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1654206 00:32:08.873 11:20:28 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:32:08.873 11:20:28 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:08.873 11:20:28 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1654206 00:32:08.873 11:20:28 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:08.873 11:20:28 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:08.873 11:20:28 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1654206' 00:32:08.873 killing process with pid 1654206 00:32:08.873 11:20:28 keyring_file -- common/autotest_common.sh@969 -- # kill 1654206 00:32:08.873 Received shutdown signal, test time was about 1.000000 seconds 00:32:08.873 00:32:08.873 Latency(us) 00:32:08.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.873 =================================================================================================================== 00:32:08.873 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:08.873 11:20:28 keyring_file -- common/autotest_common.sh@974 -- # wait 1654206 00:32:09.132 11:20:28 keyring_file -- keyring/file.sh@21 -- # killprocess 1652457 00:32:09.132 11:20:28 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1652457 ']' 00:32:09.132 11:20:28 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1652457 00:32:09.132 11:20:28 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:09.132 11:20:28 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:09.132 11:20:28 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1652457 00:32:09.132 11:20:28 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:09.132 11:20:28 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:09.132 11:20:28 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1652457' 00:32:09.132 killing process with pid 1652457 00:32:09.132 11:20:28 keyring_file -- common/autotest_common.sh@969 -- # kill 1652457 00:32:09.132 [2024-07-26 11:20:28.465030] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:09.132 11:20:28 keyring_file -- common/autotest_common.sh@974 -- # wait 1652457 00:32:09.392 00:32:09.392 real 0m12.025s 00:32:09.392 user 0m28.082s 00:32:09.392 sys 0m2.627s 00:32:09.392 11:20:28 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:09.392 11:20:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:09.392 ************************************ 00:32:09.392 END TEST keyring_file 00:32:09.392 ************************************ 00:32:09.392 11:20:28 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:32:09.392 11:20:28 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:09.392 11:20:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:09.392 11:20:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:09.392 11:20:28 -- common/autotest_common.sh@10 -- # set +x 00:32:09.392 ************************************ 00:32:09.392 START TEST keyring_linux 00:32:09.392 ************************************ 00:32:09.392 11:20:28 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:09.651 * Looking for test 
storage... 00:32:09.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:09.651 11:20:28 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:09.651 11:20:28 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.651 11:20:28 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.651 11:20:28 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.651 11:20:28 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.651 11:20:28 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.651 11:20:28 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.651 11:20:28 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.651 11:20:28 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.651 11:20:28 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:09.651 11:20:28 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:09.652 11:20:28 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:09.652 11:20:28 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:09.652 11:20:28 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:09.652 11:20:28 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:09.652 11:20:28 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:09.652 11:20:28 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:09.652 11:20:28 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:09.652 11:20:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:09.652 11:20:28 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:09.652 11:20:28 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:09.652 11:20:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:09.652 11:20:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:09.652 11:20:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:09.652 11:20:28 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:09.652 11:20:28 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:09.652 11:20:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:09.652 /tmp/:spdk-test:key0 00:32:09.652 11:20:29 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:09.652 11:20:29 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:09.652 11:20:29 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:09.652 11:20:29 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:09.652 11:20:29 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:09.652 11:20:29 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:09.652 11:20:29 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:09.652 11:20:29 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:09.652 11:20:29 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:09.652 11:20:29 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:09.652 11:20:29 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:09.652 11:20:29 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:09.652 11:20:29 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:09.652 11:20:29 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:09.652 11:20:29 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:09.652 /tmp/:spdk-test:key1 00:32:09.652 11:20:29 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1654746 00:32:09.652 11:20:29 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:09.652 11:20:29 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1654746 00:32:09.652 11:20:29 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1654746 ']' 00:32:09.652 11:20:29 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.652 11:20:29 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:09.652 11:20:29 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.652 11:20:29 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:09.652 11:20:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:09.652 [2024-07-26 11:20:29.093662] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
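Both :spdk-test:key0 and :spdk-test:key1 above come out of the same prep_key helper: the raw hex key is wrapped into the NVMe/TCP PSK interchange format (the NVMeTLSkey-1:00:...: strings seen later in the trace), written to the given path, and the file is clamped to mode 0600, since the keyring_file run earlier showed that 0660 is rejected with "Invalid permissions". A rough, self-contained sketch of that step follows; the python body is a reconstruction of the `python -` call in the log, and the appended little-endian CRC-32 is inferred from the length of the Base64 payload rather than stated anywhere in this output:

# sketch of the prep_key flow traced above; digest 0 becomes the "00" field of the interchange string
name=key0 key=00112233445566778899aabbccddeeff digest=0
path=/tmp/:spdk-test:$name            # the earlier keyring_file run used mktemp instead of a fixed path
python - "$key" "$digest" > "$path" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: CRC-32 (LE) appended before Base64 encoding
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
chmod 0600 "$path"                    # looser modes (e.g. 0660) are rejected by keyring_file_add_key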
00:32:09.652 [2024-07-26 11:20:29.093708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654746 ] 00:32:09.652 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.652 [2024-07-26 11:20:29.144943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.911 [2024-07-26 11:20:29.224601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.479 11:20:29 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:10.479 11:20:29 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:10.479 11:20:29 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:10.479 11:20:29 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:10.479 11:20:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:10.479 [2024-07-26 11:20:29.896119] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.479 null0 00:32:10.479 [2024-07-26 11:20:29.928177] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:10.479 [2024-07-26 11:20:29.928514] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:10.479 11:20:29 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.479 11:20:29 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:10.479 566471862 00:32:10.479 11:20:29 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:10.479 408075308 00:32:10.479 11:20:29 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1654766 00:32:10.479 11:20:29 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1654766 /var/tmp/bperf.sock 00:32:10.479 11:20:29 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:10.479 11:20:29 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1654766 ']' 00:32:10.479 11:20:29 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:10.479 11:20:29 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:10.479 11:20:29 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:10.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:10.479 11:20:29 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:10.479 11:20:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:10.737 [2024-07-26 11:20:29.997843] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
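From here on the keyring_linux test drives the kernel session keyring rather than plain files. Stripped of the xtrace noise, the keyctl operations exercised below boil down to four commands; the serial numbers (566471862 and 408075308 in this run) are per-run values printed by keyctl add, and reading the payload from the prepared file is a slight simplification of the script, which passes the string inline:

sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)   # store the interchange PSK as a 'user' key in the session keyring
keyctl search @s user :spdk-test:key0    # resolves the description back to the same serial number
keyctl print "$sn"                       # dumps the payload, i.e. the NVMeTLSkey-1:00:...: string
keyctl unlink "$sn"                      # cleanup; reported below as "1 links removed"

bdevperf then refers to the key purely by its description, via --psk :spdk-test:key0 on bdev_nvme_attach_controller, as the rpc.py calls later in the trace show.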
00:32:10.737 [2024-07-26 11:20:29.997886] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654766 ] 00:32:10.737 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.737 [2024-07-26 11:20:30.056013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.737 [2024-07-26 11:20:30.137097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.673 11:20:30 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:11.673 11:20:30 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:11.673 11:20:30 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:11.673 11:20:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:11.673 11:20:30 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:11.673 11:20:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:11.932 11:20:31 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:11.932 11:20:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:11.932 [2024-07-26 11:20:31.364853] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:12.191 nvme0n1 00:32:12.191 11:20:31 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:12.191 11:20:31 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:12.191 11:20:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:12.192 11:20:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:12.192 11:20:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:12.192 11:20:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:12.192 11:20:31 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:12.192 11:20:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:12.192 11:20:31 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:12.192 11:20:31 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:12.192 11:20:31 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:12.192 11:20:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:12.192 11:20:31 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:12.452 11:20:31 keyring_linux -- keyring/linux.sh@25 -- # sn=566471862 00:32:12.452 11:20:31 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:12.452 11:20:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:12.452 11:20:31 keyring_linux -- keyring/linux.sh@26 -- # [[ 566471862 == \5\6\6\4\7\1\8\6\2 ]] 00:32:12.452 11:20:31 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 566471862 00:32:12.452 11:20:31 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:12.452 11:20:31 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:12.452 Running I/O for 1 seconds... 00:32:13.830 00:32:13.830 Latency(us) 00:32:13.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:13.830 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:13.830 nvme0n1 : 1.04 2410.78 9.42 0.00 0.00 52148.81 9459.98 64738.17 00:32:13.830 =================================================================================================================== 00:32:13.830 Total : 2410.78 9.42 0.00 0.00 52148.81 9459.98 64738.17 00:32:13.830 0 00:32:13.830 11:20:32 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:13.830 11:20:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:13.830 11:20:33 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:13.830 11:20:33 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:13.830 11:20:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:13.830 11:20:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:13.830 11:20:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:13.830 11:20:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:13.830 11:20:33 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:13.830 11:20:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:13.830 11:20:33 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:13.830 11:20:33 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:13.830 11:20:33 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:32:13.830 11:20:33 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:13.830 11:20:33 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:14.088 11:20:33 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:14.088 [2024-07-26 11:20:33.487780] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:14.088 [2024-07-26 11:20:33.487974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fe770 (107): Transport endpoint is not connected 00:32:14.088 [2024-07-26 11:20:33.488970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fe770 (9): Bad file descriptor 00:32:14.088 [2024-07-26 11:20:33.489970] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:14.088 [2024-07-26 11:20:33.489978] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:14.088 [2024-07-26 11:20:33.489986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:14.088 request: 00:32:14.088 { 00:32:14.088 "name": "nvme0", 00:32:14.088 "trtype": "tcp", 00:32:14.088 "traddr": "127.0.0.1", 00:32:14.088 "adrfam": "ipv4", 00:32:14.088 "trsvcid": "4420", 00:32:14.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:14.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:14.088 "prchk_reftag": false, 00:32:14.088 "prchk_guard": false, 00:32:14.088 "hdgst": false, 00:32:14.088 "ddgst": false, 00:32:14.088 "psk": ":spdk-test:key1", 00:32:14.088 "method": "bdev_nvme_attach_controller", 00:32:14.088 "req_id": 1 00:32:14.088 } 00:32:14.088 Got JSON-RPC error response 00:32:14.088 response: 00:32:14.088 { 00:32:14.088 "code": -5, 00:32:14.088 "message": "Input/output error" 00:32:14.088 } 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@33 -- # sn=566471862 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 566471862 00:32:14.088 1 links removed 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@33 -- # sn=408075308 00:32:14.088 
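The trace above is the core of the keyring_linux case: the NVMe/TCP TLS PSK lives in the kernel session keyring as a "user" key named :spdk-test:key0, bdev_nvme_attach_controller resolves it by that name via --psk, and the follow-up attach against :spdk-test:key1 is the negative path, expected to fail with the JSON-RPC Input/output error recorded above before both keys are unlinked. A minimal sketch of the same flow outside the harness, assuming keyutils is installed, scripts/rpc.py is on PATH, and a bdevperf instance started with --wait-for-rpc is listening on /var/tmp/bperf.sock; the PSK is the sample interchange-format value printed by keyctl print in the trace, and the address and NQNs are copied from it:

  #!/usr/bin/env bash
  set -e
  # Sample PSK exactly as printed in the trace above (interchange format, not a real secret).
  PSK='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  BPERF=/var/tmp/bperf.sock

  # Store the PSK in the session keyring under the name SPDK will look up,
  # then resolve its serial the same way linux.sh's get_keysn() does.
  keyctl add user ":spdk-test:key0" "$PSK" @s
  sn=$(keyctl search @s user ":spdk-test:key0")
  keyctl print "$sn"

  # Enable the Linux keyring backend, finish framework init (bdevperf is assumed
  # to have been started with --wait-for-rpc), and attach referencing the key by name.
  rpc.py -s "$BPERF" keyring_linux_set_options --enable
  rpc.py -s "$BPERF" framework_start_init
  rpc.py -s "$BPERF" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk ":spdk-test:key0"

  # Cleanup mirrors unlink_key(): drop the key from the session keyring.
  keyctl unlink "$sn" @s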
11:20:33 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 408075308 00:32:14.088 1 links removed 00:32:14.088 11:20:33 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1654766 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1654766 ']' 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1654766 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1654766 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1654766' 00:32:14.088 killing process with pid 1654766 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@969 -- # kill 1654766 00:32:14.088 Received shutdown signal, test time was about 1.000000 seconds 00:32:14.088 00:32:14.088 Latency(us) 00:32:14.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.088 =================================================================================================================== 00:32:14.088 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:14.088 11:20:33 keyring_linux -- common/autotest_common.sh@974 -- # wait 1654766 00:32:14.346 11:20:33 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1654746 00:32:14.346 11:20:33 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1654746 ']' 00:32:14.347 11:20:33 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1654746 00:32:14.347 11:20:33 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:32:14.347 11:20:33 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:14.347 11:20:33 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1654746 00:32:14.347 11:20:33 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:14.347 11:20:33 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:14.347 11:20:33 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1654746' 00:32:14.347 killing process with pid 1654746 00:32:14.347 11:20:33 keyring_linux -- common/autotest_common.sh@969 -- # kill 1654746 00:32:14.347 11:20:33 keyring_linux -- common/autotest_common.sh@974 -- # wait 1654746 00:32:14.915 00:32:14.915 real 0m5.262s 00:32:14.915 user 0m9.250s 00:32:14.915 sys 0m1.149s 00:32:14.915 11:20:34 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:14.915 11:20:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:14.915 ************************************ 00:32:14.915 END TEST keyring_linux 00:32:14.915 ************************************ 00:32:14.915 11:20:34 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:14.916 11:20:34 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:14.916 11:20:34 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:32:14.916 11:20:34 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:32:14.916 11:20:34 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:32:14.916 11:20:34 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:14.916 11:20:34 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:14.916 11:20:34 -- 
spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:14.916 11:20:34 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:14.916 11:20:34 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:14.916 11:20:34 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:32:14.916 11:20:34 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:14.916 11:20:34 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:14.916 11:20:34 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:14.916 11:20:34 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:32:14.916 11:20:34 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:32:14.916 11:20:34 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:32:14.916 11:20:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:14.916 11:20:34 -- common/autotest_common.sh@10 -- # set +x 00:32:14.916 11:20:34 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:32:14.916 11:20:34 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:14.916 11:20:34 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:14.916 11:20:34 -- common/autotest_common.sh@10 -- # set +x 00:32:20.191 INFO: APP EXITING 00:32:20.191 INFO: killing all VMs 00:32:20.191 INFO: killing vhost app 00:32:20.191 INFO: EXIT DONE 00:32:22.145 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:32:22.145 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:22.145 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:24.680 Cleaning 00:32:24.680 Removing: /var/run/dpdk/spdk0/config 00:32:24.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:24.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:24.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:24.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:24.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:24.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:24.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:24.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:24.680 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:24.680 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:24.680 Removing: /var/run/dpdk/spdk1/config 00:32:24.680 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:24.680 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:24.680 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:24.680 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:24.680 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:24.680 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:24.680 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:24.680 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:24.680 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:24.680 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:24.680 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:24.680 Removing: /var/run/dpdk/spdk2/config 00:32:24.680 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:24.680 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:24.680 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:24.680 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:24.680 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:24.680 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:24.680 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:24.680 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:24.680 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:24.680 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:24.680 Removing: /var/run/dpdk/spdk3/config 00:32:24.680 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:24.680 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:24.680 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:24.680 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:24.680 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:24.680 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:24.680 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:24.680 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:24.680 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:24.680 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:24.680 Removing: /var/run/dpdk/spdk4/config 00:32:24.680 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:24.680 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:24.680 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:24.680 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:24.680 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:24.680 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:24.680 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:24.680 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:24.680 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:24.680 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:24.680 Removing: /dev/shm/bdev_svc_trace.1 00:32:24.938 Removing: /dev/shm/nvmf_trace.0 00:32:24.938 Removing: /dev/shm/spdk_tgt_trace.pid1275033 00:32:24.938 Removing: /var/run/dpdk/spdk0 00:32:24.938 Removing: /var/run/dpdk/spdk1 00:32:24.938 Removing: /var/run/dpdk/spdk2 00:32:24.938 Removing: /var/run/dpdk/spdk3 00:32:24.938 Removing: /var/run/dpdk/spdk4 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1272873 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1273966 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1275033 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1275664 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1276615 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1276854 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1277833 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1278035 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1278185 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1279703 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1280958 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1281236 
00:32:24.938 Removing: /var/run/dpdk/spdk_pid1281527 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1281830 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1282120 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1282374 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1282620 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1282895 00:32:24.938 Removing: /var/run/dpdk/spdk_pid1283859 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1286869 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1287127 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1287394 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1287616 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1287969 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1288122 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1288612 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1288625 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1288889 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1289121 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1289377 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1289395 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1289944 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1290195 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1290480 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1294166 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1298644 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1309255 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1309861 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1314116 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1314370 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1318628 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1324502 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1327323 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1337750 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1346675 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1348607 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1349926 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1366783 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1370617 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1414374 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1419621 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1425971 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1431978 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1431980 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1432867 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1433605 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1434513 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1435120 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1435192 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1435430 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1435437 00:32:24.939 Removing: /var/run/dpdk/spdk_pid1435461 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1436363 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1437275 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1438187 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1438662 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1438673 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1438985 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1440133 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1441333 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1449943 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1474540 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1479036 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1480765 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1482995 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1483233 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1483401 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1483574 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1484209 
00:32:25.199 Removing: /var/run/dpdk/spdk_pid1486048 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1487043 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1487615 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1489858 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1490576 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1491307 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1495357 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1505294 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1509340 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1515141 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1516625 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1517980 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1522498 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1527059 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1534513 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1534635 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1539128 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1539356 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1539582 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1540041 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1540050 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1544523 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1545080 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1549431 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1552183 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1557578 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1562907 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1571458 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1579000 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1579044 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1596579 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1597248 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1597946 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1598582 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1599399 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1600096 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1600788 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1601471 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1605734 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1605967 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1611928 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1612091 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1614314 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1622559 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1622564 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1627582 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1629551 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1631517 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1632565 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1634540 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1635815 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1644536 00:32:25.199 Removing: /var/run/dpdk/spdk_pid1644998 00:32:25.457 Removing: /var/run/dpdk/spdk_pid1645457 00:32:25.457 Removing: /var/run/dpdk/spdk_pid1647720 00:32:25.457 Removing: /var/run/dpdk/spdk_pid1648235 00:32:25.457 Removing: /var/run/dpdk/spdk_pid1648787 00:32:25.457 Removing: /var/run/dpdk/spdk_pid1652457 00:32:25.457 Removing: /var/run/dpdk/spdk_pid1652683 00:32:25.457 Removing: /var/run/dpdk/spdk_pid1654206 00:32:25.457 Removing: /var/run/dpdk/spdk_pid1654746 00:32:25.457 Removing: /var/run/dpdk/spdk_pid1654766 00:32:25.457 Clean 00:32:25.457 11:20:44 -- common/autotest_common.sh@1451 -- # return 0 00:32:25.457 11:20:44 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:32:25.457 11:20:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:25.457 11:20:44 -- 
common/autotest_common.sh@10 -- # set +x 00:32:25.457 11:20:44 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:32:25.457 11:20:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:25.457 11:20:44 -- common/autotest_common.sh@10 -- # set +x 00:32:25.457 11:20:44 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:25.457 11:20:44 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:25.457 11:20:44 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:25.457 11:20:44 -- spdk/autotest.sh@395 -- # hash lcov 00:32:25.457 11:20:44 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:25.457 11:20:44 -- spdk/autotest.sh@397 -- # hostname 00:32:25.457 11:20:44 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:25.715 geninfo: WARNING: invalid characters removed from testname! 00:32:47.645 11:21:04 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:47.904 11:21:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:49.809 11:21:09 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:51.715 11:21:10 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:53.619 11:21:12 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:55.522 11:21:14 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:56.898 11:21:16 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:57.157 11:21:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.157 11:21:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:57.157 11:21:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.157 11:21:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.157 11:21:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.157 11:21:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.157 11:21:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.157 11:21:16 -- paths/export.sh@5 -- $ export PATH 00:32:57.157 11:21:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.157 11:21:16 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:32:57.157 11:21:16 -- common/autobuild_common.sh@447 -- $ date +%s 00:32:57.157 11:21:16 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721985676.XXXXXX 00:32:57.157 11:21:16 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721985676.vM6RaO 00:32:57.157 11:21:16 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:32:57.157 11:21:16 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:32:57.157 11:21:16 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:32:57.157 11:21:16 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:32:57.157 11:21:16 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:32:57.158 11:21:16 -- common/autobuild_common.sh@463 -- $ get_config_params 00:32:57.158 11:21:16 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:32:57.158 11:21:16 -- common/autotest_common.sh@10 -- $ set +x 00:32:57.158 11:21:16 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:32:57.158 11:21:16 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:32:57.158 11:21:16 -- pm/common@17 -- $ local monitor 00:32:57.158 11:21:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:57.158 11:21:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:57.158 11:21:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:57.158 11:21:16 -- pm/common@21 -- $ date +%s 00:32:57.158 11:21:16 -- pm/common@21 -- $ date +%s 00:32:57.158 11:21:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:57.158 11:21:16 -- pm/common@25 -- $ sleep 1 00:32:57.158 11:21:16 -- pm/common@21 -- $ date +%s 00:32:57.158 11:21:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721985676 00:32:57.158 11:21:16 -- pm/common@21 -- $ date +%s 00:32:57.158 11:21:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721985676 00:32:57.158 11:21:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721985676 00:32:57.158 11:21:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721985676 00:32:57.158 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721985676_collect-vmstat.pm.log 00:32:57.158 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721985676_collect-cpu-load.pm.log 00:32:57.158 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721985676_collect-cpu-temp.pm.log 00:32:57.158 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721985676_collect-bmc-pm.bmc.pm.log 00:32:58.095 11:21:17 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:32:58.095 11:21:17 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:32:58.095 11:21:17 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:58.095 11:21:17 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:58.095 11:21:17 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:58.095 11:21:17 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:58.095 11:21:17 -- common/autotest_common.sh@736 -- $ 
flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:58.095 11:21:17 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:58.095 11:21:17 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:58.095 11:21:17 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:58.095 11:21:17 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:58.095 11:21:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:32:58.095 11:21:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:58.095 11:21:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:58.095 11:21:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:32:58.095 11:21:17 -- pm/common@44 -- $ pid=1665365 00:32:58.095 11:21:17 -- pm/common@50 -- $ kill -TERM 1665365 00:32:58.095 11:21:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:58.095 11:21:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:32:58.095 11:21:17 -- pm/common@44 -- $ pid=1665366 00:32:58.095 11:21:17 -- pm/common@50 -- $ kill -TERM 1665366 00:32:58.095 11:21:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:58.095 11:21:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:32:58.095 11:21:17 -- pm/common@44 -- $ pid=1665369 00:32:58.095 11:21:17 -- pm/common@50 -- $ kill -TERM 1665369 00:32:58.095 11:21:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:58.095 11:21:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:32:58.095 11:21:17 -- pm/common@44 -- $ pid=1665395 00:32:58.095 11:21:17 -- pm/common@50 -- $ sudo -E kill -TERM 1665395 00:32:58.095 + [[ -n 1169063 ]] 00:32:58.095 + sudo kill 1169063 00:32:58.106 [Pipeline] } 00:32:58.126 [Pipeline] // stage 00:32:58.132 [Pipeline] } 00:32:58.150 [Pipeline] // timeout 00:32:58.156 [Pipeline] } 00:32:58.174 [Pipeline] // catchError 00:32:58.180 [Pipeline] } 00:32:58.198 [Pipeline] // wrap 00:32:58.204 [Pipeline] } 00:32:58.222 [Pipeline] // catchError 00:32:58.232 [Pipeline] stage 00:32:58.234 [Pipeline] { (Epilogue) 00:32:58.250 [Pipeline] catchError 00:32:58.252 [Pipeline] { 00:32:58.268 [Pipeline] echo 00:32:58.270 Cleanup processes 00:32:58.276 [Pipeline] sh 00:32:58.564 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:58.564 1665473 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:32:58.564 1665764 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:58.653 [Pipeline] sh 00:32:58.941 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:58.942 ++ grep -v 'sudo pgrep' 00:32:58.942 ++ awk '{print $1}' 00:32:58.942 + sudo kill -9 1665473 00:32:58.954 [Pipeline] sh 00:32:59.244 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:09.252 [Pipeline] sh 00:33:09.539 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:09.540 Artifacts sizes are good 00:33:09.557 [Pipeline] archiveArtifacts 00:33:09.565 Archiving artifacts 00:33:09.716 [Pipeline] sh 00:33:10.006 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 
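The coverage post-processing earlier in the epilogue follows a capture, merge, filter pattern: lcov captures the counters gathered during the run (tagged with the node's hostname), merges them with the cov_base.info baseline, and then strips DPDK, system, and example/tool sources with successive -r passes before the report is archived. A condensed sketch of that sequence, assuming lcov is installed, $SPDK points at the checkout and $OUT at the output directory; the filter patterns are the ones visible in the trace, and the rc options are trimmed to the lcov-relevant subset:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  OUT="$SPDK/../output"
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

  # Capture test-time counters from the build tree, tagged with the node name.
  lcov $LCOV_OPTS -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"

  # Merge with the pre-test baseline, then drop paths that should not count
  # toward SPDK coverage (one -r pass per pattern, as in the trace).
  lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
  done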
00:33:10.021 [Pipeline] cleanWs 00:33:10.032 [WS-CLEANUP] Deleting project workspace... 00:33:10.032 [WS-CLEANUP] Deferred wipeout is used... 00:33:10.039 [WS-CLEANUP] done 00:33:10.041 [Pipeline] } 00:33:10.061 [Pipeline] // catchError 00:33:10.075 [Pipeline] sh 00:33:10.361 + logger -p user.info -t JENKINS-CI 00:33:10.371 [Pipeline] } 00:33:10.389 [Pipeline] // stage 00:33:10.394 [Pipeline] } 00:33:10.411 [Pipeline] // node 00:33:10.417 [Pipeline] End of Pipeline 00:33:10.450 Finished: SUCCESS